00:00:00.000 Started by upstream project "autotest-per-patch" build number 127181 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.073 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.151 Using shallow fetch with depth 1 00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.151 > git --version # timeout=10 00:00:00.195 > git --version # 'git version 2.39.2' 00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.538 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.553 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.566 Checking out Revision 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b (FETCH_HEAD) 00:00:06.566 > git config core.sparsecheckout # timeout=10 00:00:06.577 > git read-tree -mu HEAD # timeout=10 00:00:06.595 > git checkout -f 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=5 00:00:06.613 Commit message: "jjb/jobs: add SPDK_TEST_SETUP flag into configuration" 00:00:06.613 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10 00:00:06.709 [Pipeline] Start of Pipeline 00:00:06.723 [Pipeline] library 00:00:06.725 Loading library shm_lib@master 00:00:06.725 Library shm_lib@master is cached. Copying from home. 00:00:06.740 [Pipeline] node 00:00:06.750 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu22-vg-autotest_3 00:00:06.751 [Pipeline] { 00:00:06.763 [Pipeline] catchError 00:00:06.765 [Pipeline] { 00:00:06.779 [Pipeline] wrap 00:00:06.789 [Pipeline] { 00:00:06.798 [Pipeline] stage 00:00:06.801 [Pipeline] { (Prologue) 00:00:06.826 [Pipeline] echo 00:00:06.828 Node: VM-host-SM0 00:00:06.836 [Pipeline] cleanWs 00:00:06.845 [WS-CLEANUP] Deleting project workspace... 00:00:06.845 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.852 [WS-CLEANUP] done 00:00:07.039 [Pipeline] setCustomBuildProperty 00:00:07.111 [Pipeline] httpRequest 00:00:07.137 [Pipeline] echo 00:00:07.139 Sorcerer 10.211.164.101 is alive 00:00:07.147 [Pipeline] httpRequest 00:00:07.151 HttpMethod: GET 00:00:07.151 URL: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:07.152 Sending request to url: http://10.211.164.101/packages/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:07.159 Response Code: HTTP/1.1 200 OK 00:00:07.160 Success: Status code 200 is in the accepted range: 200,404 00:00:07.160 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_3/jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:14.611 [Pipeline] sh 00:00:14.887 + tar --no-same-owner -xf jbp_8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b.tar.gz 00:00:14.898 [Pipeline] httpRequest 00:00:14.913 [Pipeline] echo 00:00:14.914 Sorcerer 10.211.164.101 is alive 00:00:14.921 [Pipeline] httpRequest 00:00:14.924 HttpMethod: GET 00:00:14.925 URL: http://10.211.164.101/packages/spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:00:14.925 Sending request to url: http://10.211.164.101/packages/spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:00:14.935 Response Code: HTTP/1.1 200 OK 00:00:14.935 Success: Status code 200 is in the accepted range: 200,404 00:00:14.935 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:01:28.304 [Pipeline] sh 00:01:28.583 + tar --no-same-owner -xf spdk_50fa6ca312c882d0fe919228ad8d3bfd61579d43.tar.gz 00:01:31.125 [Pipeline] sh 00:01:31.403 + git -C spdk log --oneline -n5 00:01:31.403 50fa6ca31 raid: allow to skip rebuild when adding a base bdev 00:01:31.403 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:31.403 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:31.403 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:31.403 d005e023b raid: fix empty slot not updated in sb after resize 00:01:31.422 [Pipeline] writeFile 00:01:31.438 [Pipeline] sh 00:01:31.720 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.732 [Pipeline] sh 00:01:32.011 + cat autorun-spdk.conf 00:01:32.011 SPDK_TEST_UNITTEST=1 00:01:32.011 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.011 SPDK_TEST_NVME=1 00:01:32.011 SPDK_TEST_BLOCKDEV=1 00:01:32.011 SPDK_RUN_ASAN=1 00:01:32.011 SPDK_RUN_UBSAN=1 00:01:32.011 SPDK_TEST_RAID5=1 00:01:32.011 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.017 RUN_NIGHTLY=0 00:01:32.020 [Pipeline] } 00:01:32.037 [Pipeline] // stage 00:01:32.054 [Pipeline] stage 00:01:32.057 [Pipeline] { (Run VM) 00:01:32.073 [Pipeline] sh 00:01:32.354 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.354 + echo 'Start stage prepare_nvme.sh' 00:01:32.354 Start stage prepare_nvme.sh 00:01:32.354 + [[ -n 6 ]] 00:01:32.354 + disk_prefix=ex6 00:01:32.354 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest_3 ]] 00:01:32.354 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest_3/autorun-spdk.conf ]] 00:01:32.354 + source /var/jenkins/workspace/ubuntu22-vg-autotest_3/autorun-spdk.conf 00:01:32.354 ++ SPDK_TEST_UNITTEST=1 00:01:32.354 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.354 ++ SPDK_TEST_NVME=1 00:01:32.354 ++ SPDK_TEST_BLOCKDEV=1 00:01:32.354 ++ SPDK_RUN_ASAN=1 00:01:32.354 ++ SPDK_RUN_UBSAN=1 00:01:32.354 ++ SPDK_TEST_RAID5=1 00:01:32.354 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.354 ++ RUN_NIGHTLY=0 00:01:32.354 + cd /var/jenkins/workspace/ubuntu22-vg-autotest_3 00:01:32.354 + nvme_files=() 00:01:32.354 + declare -A nvme_files 00:01:32.354 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.354 + nvme_files['nvme.img']=5G 00:01:32.354 + nvme_files['nvme-cmb.img']=5G 00:01:32.354 + nvme_files['nvme-multi0.img']=4G 00:01:32.354 + nvme_files['nvme-multi1.img']=4G 00:01:32.354 + nvme_files['nvme-multi2.img']=4G 00:01:32.354 + nvme_files['nvme-openstack.img']=8G 00:01:32.354 + nvme_files['nvme-zns.img']=5G 00:01:32.354 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.354 + (( SPDK_TEST_FTL == 1 )) 00:01:32.354 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.354 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:32.354 + for nvme in "${!nvme_files[@]}" 00:01:32.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:32.354 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.354 + for nvme in "${!nvme_files[@]}" 00:01:32.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:32.354 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.354 + for nvme in "${!nvme_files[@]}" 00:01:32.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:32.354 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.354 + for nvme in "${!nvme_files[@]}" 00:01:32.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:32.354 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.354 + for nvme in "${!nvme_files[@]}" 00:01:32.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:32.354 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.354 + for nvme in "${!nvme_files[@]}" 00:01:32.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:32.354 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.354 + for nvme in "${!nvme_files[@]}" 00:01:32.354 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:32.613 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.613 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:32.613 + echo 'End stage prepare_nvme.sh' 00:01:32.613 End stage prepare_nvme.sh 00:01:32.625 [Pipeline] sh 00:01:32.906 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.906 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f ubuntu2204 00:01:32.906 00:01:32.906 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk/scripts/vagrant 00:01:32.906 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk 00:01:32.906 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest_3 00:01:32.906 HELP=0 00:01:32.906 DRY_RUN=0 00:01:32.906 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:01:32.906 NVME_DISKS_TYPE=nvme, 00:01:32.906 NVME_AUTO_CREATE=0 00:01:32.906 NVME_DISKS_NAMESPACES=, 00:01:32.906 NVME_CMB=, 00:01:32.906 NVME_PMR=, 00:01:32.906 NVME_ZNS=, 00:01:32.906 NVME_MS=, 00:01:32.906 NVME_FDP=, 00:01:32.906 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:32.906 SPDK_VAGRANT_VMCPU=10 00:01:32.906 SPDK_VAGRANT_VMRAM=12288 00:01:32.906 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.906 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:32.906 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.906 SPDK_OPENSTACK_NETWORK=0 
00:01:32.906 VAGRANT_PACKAGE_BOX=0 00:01:32.906 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:01:32.906 FORCE_DISTRO=true 00:01:32.906 VAGRANT_BOX_VERSION= 00:01:32.906 EXTRA_VAGRANTFILES= 00:01:32.906 NIC_MODEL=e1000 00:01:32.906 00:01:32.906 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt' 00:01:32.906 /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest_3 00:01:36.194 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.453 ==> default: Creating image (snapshot of base box volume). 00:01:36.713 ==> default: Creating domain with the following settings... 00:01:36.713 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1721915065_e7ea275bc1d05009531f 00:01:36.713 ==> default: -- Domain type: kvm 00:01:36.713 ==> default: -- Cpus: 10 00:01:36.713 ==> default: -- Feature: acpi 00:01:36.713 ==> default: -- Feature: apic 00:01:36.713 ==> default: -- Feature: pae 00:01:36.713 ==> default: -- Memory: 12288M 00:01:36.713 ==> default: -- Memory Backing: hugepages: 00:01:36.713 ==> default: -- Management MAC: 00:01:36.713 ==> default: -- Loader: 00:01:36.713 ==> default: -- Nvram: 00:01:36.713 ==> default: -- Base box: spdk/ubuntu2204 00:01:36.713 ==> default: -- Storage pool: default 00:01:36.713 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1721915065_e7ea275bc1d05009531f.img (20G) 00:01:36.713 ==> default: -- Volume Cache: default 00:01:36.713 ==> default: -- Kernel: 00:01:36.713 ==> default: -- Initrd: 00:01:36.713 ==> default: -- Graphics Type: vnc 00:01:36.713 ==> default: -- Graphics Port: -1 00:01:36.713 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.713 ==> default: -- Graphics Password: Not defined 00:01:36.713 ==> default: -- Video Type: cirrus 00:01:36.713 ==> default: -- Video VRAM: 9216 00:01:36.713 ==> default: -- Sound Type: 00:01:36.713 ==> default: -- Keymap: en-us 00:01:36.713 ==> default: -- TPM Path: 00:01:36.713 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.713 ==> default: -- Command line args: 00:01:36.713 ==> default: -> value=-device, 00:01:36.713 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:36.713 ==> default: -> value=-drive, 00:01:36.713 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:36.713 ==> default: -> value=-device, 00:01:36.713 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.972 ==> default: Creating shared folders metadata... 00:01:36.972 ==> default: Starting domain. 00:01:38.874 ==> default: Waiting for domain to get an IP address... 00:01:48.850 ==> default: Waiting for SSH to become available... 00:01:50.751 ==> default: Configuring and enabling network interfaces... 00:01:56.037 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:00.289 ==> default: Mounting SSHFS shared folder... 00:02:01.662 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:02:01.662 ==> default: Checking Mount.. 00:02:02.228 ==> default: Folder Successfully Mounted! 00:02:02.228 ==> default: Running provisioner: file... 00:02:02.793 default: ~/.gitconfig => .gitconfig 00:02:03.050 00:02:03.050 SUCCESS! 
00:02:03.050 00:02:03.050 cd to /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:02:03.050 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:03.050 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt" to destroy all trace of vm. 00:02:03.050 00:02:03.060 [Pipeline] } 00:02:03.079 [Pipeline] // stage 00:02:03.088 [Pipeline] dir 00:02:03.089 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt 00:02:03.091 [Pipeline] { 00:02:03.105 [Pipeline] catchError 00:02:03.107 [Pipeline] { 00:02:03.122 [Pipeline] sh 00:02:03.403 + vagrant ssh-config --host vagrant 00:02:03.403 + sed -ne /^Host/,$p 00:02:03.403 + tee ssh_conf 00:02:06.685 Host vagrant 00:02:06.685 HostName 192.168.121.199 00:02:06.685 User vagrant 00:02:06.685 Port 22 00:02:06.685 UserKnownHostsFile /dev/null 00:02:06.685 StrictHostKeyChecking no 00:02:06.685 PasswordAuthentication no 00:02:06.685 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:02:06.685 IdentitiesOnly yes 00:02:06.685 LogLevel FATAL 00:02:06.685 ForwardAgent yes 00:02:06.685 ForwardX11 yes 00:02:06.685 00:02:06.699 [Pipeline] withEnv 00:02:06.701 [Pipeline] { 00:02:06.717 [Pipeline] sh 00:02:07.048 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:07.048 source /etc/os-release 00:02:07.048 [[ -e /image.version ]] && img=$(< /image.version) 00:02:07.048 # Minimal, systemd-like check. 00:02:07.048 if [[ -e /.dockerenv ]]; then 00:02:07.048 # Clear garbage from the node's name: 00:02:07.048 # agt-er_autotest_547-896 -> autotest_547-896 00:02:07.048 # $HOSTNAME is the actual container id 00:02:07.048 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:07.048 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:07.048 # We can assume this is a mount from a host where container is running, 00:02:07.048 # so fetch its hostname to easily identify the target swarm worker. 
00:02:07.048 container="$(< /etc/hostname) ($agent)" 00:02:07.048 else 00:02:07.048 # Fallback 00:02:07.048 container=$agent 00:02:07.048 fi 00:02:07.048 fi 00:02:07.048 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:07.048 00:02:07.318 [Pipeline] } 00:02:07.339 [Pipeline] // withEnv 00:02:07.349 [Pipeline] setCustomBuildProperty 00:02:07.366 [Pipeline] stage 00:02:07.368 [Pipeline] { (Tests) 00:02:07.389 [Pipeline] sh 00:02:07.670 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:07.941 [Pipeline] sh 00:02:08.217 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:08.491 [Pipeline] timeout 00:02:08.491 Timeout set to expire in 1 hr 30 min 00:02:08.493 [Pipeline] { 00:02:08.510 [Pipeline] sh 00:02:08.789 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:09.355 HEAD is now at 50fa6ca31 raid: allow to skip rebuild when adding a base bdev 00:02:09.370 [Pipeline] sh 00:02:09.649 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:09.922 [Pipeline] sh 00:02:10.201 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:10.475 [Pipeline] sh 00:02:10.753 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:02:11.012 ++ readlink -f spdk_repo 00:02:11.012 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:11.012 + [[ -n /home/vagrant/spdk_repo ]] 00:02:11.012 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:11.012 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:11.012 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:11.012 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:11.012 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:11.012 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:02:11.012 + cd /home/vagrant/spdk_repo 00:02:11.012 + source /etc/os-release 00:02:11.012 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:02:11.012 ++ NAME=Ubuntu 00:02:11.012 ++ VERSION_ID=22.04 00:02:11.012 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:02:11.012 ++ VERSION_CODENAME=jammy 00:02:11.012 ++ ID=ubuntu 00:02:11.012 ++ ID_LIKE=debian 00:02:11.012 ++ HOME_URL=https://www.ubuntu.com/ 00:02:11.012 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:11.012 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:11.012 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:11.012 ++ UBUNTU_CODENAME=jammy 00:02:11.012 + uname -a 00:02:11.012 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:11.012 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:11.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:02:11.315 Hugepages 00:02:11.315 node hugesize free / total 00:02:11.315 node0 1048576kB 0 / 0 00:02:11.315 node0 2048kB 0 / 0 00:02:11.315 00:02:11.315 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:11.315 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:11.315 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:11.315 + rm -f /tmp/spdk-ld-path 00:02:11.315 + source autorun-spdk.conf 00:02:11.315 ++ SPDK_TEST_UNITTEST=1 00:02:11.315 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.315 ++ SPDK_TEST_NVME=1 00:02:11.315 ++ SPDK_TEST_BLOCKDEV=1 00:02:11.315 ++ SPDK_RUN_ASAN=1 00:02:11.315 ++ SPDK_RUN_UBSAN=1 00:02:11.315 ++ SPDK_TEST_RAID5=1 00:02:11.315 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.315 ++ RUN_NIGHTLY=0 00:02:11.315 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:11.315 + [[ -n '' ]] 00:02:11.315 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:11.315 + for M in /var/spdk/build-*-manifest.txt 00:02:11.315 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:11.315 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.315 + for M in /var/spdk/build-*-manifest.txt 00:02:11.315 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:11.315 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.315 ++ uname 00:02:11.315 + [[ Linux == \L\i\n\u\x ]] 00:02:11.315 + sudo dmesg -T 00:02:11.316 + sudo dmesg --clear 00:02:11.316 + dmesg_pid=2156 00:02:11.316 + sudo dmesg -Tw 00:02:11.316 + [[ Ubuntu == FreeBSD ]] 00:02:11.316 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.316 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.316 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:11.316 + [[ -x /usr/src/fio-static/fio ]] 00:02:11.316 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:11.316 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:11.316 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:11.316 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:11.316 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:11.316 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:11.316 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:11.316 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:11.316 Test configuration: 00:02:11.316 SPDK_TEST_UNITTEST=1 00:02:11.316 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.316 SPDK_TEST_NVME=1 00:02:11.316 SPDK_TEST_BLOCKDEV=1 00:02:11.316 SPDK_RUN_ASAN=1 00:02:11.316 SPDK_RUN_UBSAN=1 00:02:11.316 SPDK_TEST_RAID5=1 00:02:11.316 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.574 RUN_NIGHTLY=0 13:45:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:11.574 13:45:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:11.574 13:45:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.574 13:45:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.574 13:45:00 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:11.574 13:45:00 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:11.574 13:45:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:11.574 13:45:00 -- paths/export.sh@5 -- $ export PATH 00:02:11.574 13:45:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:11.574 13:45:00 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:11.574 13:45:00 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:11.574 13:45:00 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721915100.XXXXXX 00:02:11.574 13:45:00 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721915100.N1IfTg 00:02:11.574 13:45:00 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:11.574 13:45:00 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:11.574 13:45:00 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:11.574 13:45:00 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:11.574 13:45:00 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:11.574 13:45:00 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:11.574 13:45:00 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:11.574 13:45:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.574 13:45:00 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:02:11.574 13:45:00 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:11.574 13:45:00 -- pm/common@17 -- $ local monitor 00:02:11.574 13:45:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.574 13:45:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.574 13:45:00 -- pm/common@25 -- $ sleep 1 00:02:11.574 13:45:00 -- pm/common@21 -- $ date +%s 00:02:11.574 13:45:00 -- pm/common@21 -- $ date +%s 00:02:11.574 13:45:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721915100 00:02:11.574 13:45:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721915100 00:02:11.574 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721915100_collect-vmstat.pm.log 00:02:11.574 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721915100_collect-cpu-load.pm.log 00:02:12.509 13:45:01 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:12.509 13:45:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:12.509 13:45:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:12.509 13:45:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:12.509 13:45:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:12.509 Thu Jul 25 13:45:01 UTC 2024 00:02:12.509 13:45:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:12.509 v24.09-pre-322-g50fa6ca31 00:02:12.509 13:45:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:12.509 13:45:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:12.509 13:45:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:12.509 13:45:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:12.509 13:45:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.509 ************************************ 00:02:12.509 START TEST asan 00:02:12.509 ************************************ 00:02:12.509 13:45:01 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:12.509 using asan 00:02:12.509 00:02:12.509 real 0m0.000s 00:02:12.509 user 0m0.000s 00:02:12.509 sys 0m0.000s 00:02:12.509 13:45:01 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:12.509 ************************************ 00:02:12.509 13:45:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:12.509 END TEST asan 00:02:12.509 ************************************ 00:02:12.509 13:45:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:12.509 13:45:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:12.509 13:45:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:12.509 13:45:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:12.509 
13:45:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.509 ************************************ 00:02:12.509 START TEST ubsan 00:02:12.509 ************************************ 00:02:12.509 using ubsan 00:02:12.509 13:45:01 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:12.509 00:02:12.509 real 0m0.000s 00:02:12.509 user 0m0.000s 00:02:12.509 sys 0m0.000s 00:02:12.509 13:45:01 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:12.509 ************************************ 00:02:12.509 13:45:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:12.509 END TEST ubsan 00:02:12.509 ************************************ 00:02:12.768 13:45:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:12.768 13:45:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:12.768 13:45:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:12.768 13:45:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:12.768 13:45:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:12.768 13:45:01 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:12.768 13:45:01 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:12.768 13:45:01 -- common/autobuild_common.sh@423 -- $ run_test unittest_build _unittest_build 00:02:12.768 13:45:01 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:12.768 13:45:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:12.768 13:45:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.768 ************************************ 00:02:12.768 START TEST unittest_build 00:02:12.768 ************************************ 00:02:12.768 13:45:01 unittest_build -- common/autotest_common.sh@1125 -- $ _unittest_build 00:02:12.768 13:45:01 unittest_build -- common/autobuild_common.sh@414 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:02:12.768 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:12.768 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:13.335 Using 'verbs' RDMA provider 00:02:28.826 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:41.028 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:41.028 Creating mk/config.mk...done. 00:02:41.028 Creating mk/cc.flags.mk...done. 00:02:41.028 Type 'make' to build. 00:02:41.028 13:45:29 unittest_build -- common/autobuild_common.sh@415 -- $ make -j10 00:02:41.028 make[1]: Nothing to be done for 'all'. 
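(Reference note, not part of the job output: the SPDK configure and build step recorded above can be reproduced outside the CI harness with the same flags this log shows. The sketch below assumes the same /home/vagrant/spdk_repo/spdk checkout used inside the VM; adjust the paths for a local clone.)

    cd /home/vagrant/spdk_repo/spdk
    # identical flag set to the _unittest_build configure call logged above
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator \
        --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
    make -j10   # same job count the autobuild step uses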
00:02:55.897 The Meson build system 00:02:55.897 Version: 1.4.0 00:02:55.897 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:55.897 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:55.897 Build type: native build 00:02:55.897 Program cat found: YES (/usr/bin/cat) 00:02:55.897 Project name: DPDK 00:02:55.897 Project version: 24.03.0 00:02:55.897 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:55.897 C linker for the host machine: cc ld.bfd 2.38 00:02:55.897 Host machine cpu family: x86_64 00:02:55.897 Host machine cpu: x86_64 00:02:55.897 Message: ## Building in Developer Mode ## 00:02:55.897 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:55.897 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:55.897 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:55.897 Program python3 found: YES (/usr/bin/python3) 00:02:55.897 Program cat found: YES (/usr/bin/cat) 00:02:55.897 Compiler for C supports arguments -march=native: YES 00:02:55.897 Checking for size of "void *" : 8 00:02:55.897 Checking for size of "void *" : 8 (cached) 00:02:55.897 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:55.897 Library m found: YES 00:02:55.897 Library numa found: YES 00:02:55.897 Has header "numaif.h" : YES 00:02:55.897 Library fdt found: NO 00:02:55.898 Library execinfo found: NO 00:02:55.898 Has header "execinfo.h" : YES 00:02:55.898 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:55.898 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:55.898 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:55.898 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:55.898 Run-time dependency openssl found: YES 3.0.2 00:02:55.898 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:55.898 Library pcap found: NO 00:02:55.898 Compiler for C supports arguments -Wcast-qual: YES 00:02:55.898 Compiler for C supports arguments -Wdeprecated: YES 00:02:55.898 Compiler for C supports arguments -Wformat: YES 00:02:55.898 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:55.898 Compiler for C supports arguments -Wformat-security: YES 00:02:55.898 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:55.898 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:55.898 Compiler for C supports arguments -Wnested-externs: YES 00:02:55.898 Compiler for C supports arguments -Wold-style-definition: YES 00:02:55.898 Compiler for C supports arguments -Wpointer-arith: YES 00:02:55.898 Compiler for C supports arguments -Wsign-compare: YES 00:02:55.898 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:55.898 Compiler for C supports arguments -Wundef: YES 00:02:55.898 Compiler for C supports arguments -Wwrite-strings: YES 00:02:55.898 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:55.898 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:55.898 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:55.898 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:55.898 Program objdump found: YES (/usr/bin/objdump) 00:02:55.898 Compiler for C supports arguments -mavx512f: YES 00:02:55.898 Checking if "AVX512 checking" compiles: YES 00:02:55.898 Fetching value of define "__SSE4_2__" : 1 00:02:55.898 Fetching value of define "__AES__" : 1 
00:02:55.898 Fetching value of define "__AVX__" : 1 00:02:55.898 Fetching value of define "__AVX2__" : 1 00:02:55.898 Fetching value of define "__AVX512BW__" : (undefined) 00:02:55.898 Fetching value of define "__AVX512CD__" : (undefined) 00:02:55.898 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:55.898 Fetching value of define "__AVX512F__" : (undefined) 00:02:55.898 Fetching value of define "__AVX512VL__" : (undefined) 00:02:55.898 Fetching value of define "__PCLMUL__" : 1 00:02:55.898 Fetching value of define "__RDRND__" : 1 00:02:55.898 Fetching value of define "__RDSEED__" : 1 00:02:55.898 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:55.898 Fetching value of define "__znver1__" : (undefined) 00:02:55.898 Fetching value of define "__znver2__" : (undefined) 00:02:55.898 Fetching value of define "__znver3__" : (undefined) 00:02:55.898 Fetching value of define "__znver4__" : (undefined) 00:02:55.898 Library asan found: YES 00:02:55.898 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:55.898 Message: lib/log: Defining dependency "log" 00:02:55.898 Message: lib/kvargs: Defining dependency "kvargs" 00:02:55.898 Message: lib/telemetry: Defining dependency "telemetry" 00:02:55.898 Library rt found: YES 00:02:55.898 Checking for function "getentropy" : NO 00:02:55.898 Message: lib/eal: Defining dependency "eal" 00:02:55.898 Message: lib/ring: Defining dependency "ring" 00:02:55.898 Message: lib/rcu: Defining dependency "rcu" 00:02:55.898 Message: lib/mempool: Defining dependency "mempool" 00:02:55.898 Message: lib/mbuf: Defining dependency "mbuf" 00:02:55.898 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:55.898 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:55.898 Compiler for C supports arguments -mpclmul: YES 00:02:55.898 Compiler for C supports arguments -maes: YES 00:02:55.898 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:55.898 Compiler for C supports arguments -mavx512bw: YES 00:02:55.898 Compiler for C supports arguments -mavx512dq: YES 00:02:55.898 Compiler for C supports arguments -mavx512vl: YES 00:02:55.898 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:55.898 Compiler for C supports arguments -mavx2: YES 00:02:55.898 Compiler for C supports arguments -mavx: YES 00:02:55.898 Message: lib/net: Defining dependency "net" 00:02:55.898 Message: lib/meter: Defining dependency "meter" 00:02:55.898 Message: lib/ethdev: Defining dependency "ethdev" 00:02:55.898 Message: lib/pci: Defining dependency "pci" 00:02:55.898 Message: lib/cmdline: Defining dependency "cmdline" 00:02:55.898 Message: lib/hash: Defining dependency "hash" 00:02:55.898 Message: lib/timer: Defining dependency "timer" 00:02:55.898 Message: lib/compressdev: Defining dependency "compressdev" 00:02:55.898 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:55.898 Message: lib/dmadev: Defining dependency "dmadev" 00:02:55.898 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:55.898 Message: lib/power: Defining dependency "power" 00:02:55.898 Message: lib/reorder: Defining dependency "reorder" 00:02:55.898 Message: lib/security: Defining dependency "security" 00:02:55.898 Has header "linux/userfaultfd.h" : YES 00:02:55.898 Has header "linux/vduse.h" : YES 00:02:55.898 Message: lib/vhost: Defining dependency "vhost" 00:02:55.898 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:55.898 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:55.898 Message: 
drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:55.898 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:55.898 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:55.898 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:55.898 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:55.898 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:55.898 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:55.898 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:55.898 Program doxygen found: YES (/usr/bin/doxygen) 00:02:55.898 Configuring doxy-api-html.conf using configuration 00:02:55.898 Configuring doxy-api-man.conf using configuration 00:02:55.898 Program mandb found: YES (/usr/bin/mandb) 00:02:55.898 Program sphinx-build found: NO 00:02:55.898 Configuring rte_build_config.h using configuration 00:02:55.898 Message: 00:02:55.898 ================= 00:02:55.898 Applications Enabled 00:02:55.898 ================= 00:02:55.898 00:02:55.898 apps: 00:02:55.898 00:02:55.898 00:02:55.898 Message: 00:02:55.898 ================= 00:02:55.898 Libraries Enabled 00:02:55.898 ================= 00:02:55.898 00:02:55.898 libs: 00:02:55.898 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:55.898 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:55.898 cryptodev, dmadev, power, reorder, security, vhost, 00:02:55.898 00:02:55.898 Message: 00:02:55.898 =============== 00:02:55.898 Drivers Enabled 00:02:55.898 =============== 00:02:55.898 00:02:55.898 common: 00:02:55.898 00:02:55.898 bus: 00:02:55.898 pci, vdev, 00:02:55.898 mempool: 00:02:55.898 ring, 00:02:55.898 dma: 00:02:55.898 00:02:55.898 net: 00:02:55.898 00:02:55.898 crypto: 00:02:55.898 00:02:55.898 compress: 00:02:55.898 00:02:55.898 vdpa: 00:02:55.898 00:02:55.898 00:02:55.898 Message: 00:02:55.898 ================= 00:02:55.898 Content Skipped 00:02:55.898 ================= 00:02:55.898 00:02:55.898 apps: 00:02:55.898 dumpcap: explicitly disabled via build config 00:02:55.898 graph: explicitly disabled via build config 00:02:55.898 pdump: explicitly disabled via build config 00:02:55.898 proc-info: explicitly disabled via build config 00:02:55.898 test-acl: explicitly disabled via build config 00:02:55.898 test-bbdev: explicitly disabled via build config 00:02:55.898 test-cmdline: explicitly disabled via build config 00:02:55.898 test-compress-perf: explicitly disabled via build config 00:02:55.898 test-crypto-perf: explicitly disabled via build config 00:02:55.898 test-dma-perf: explicitly disabled via build config 00:02:55.898 test-eventdev: explicitly disabled via build config 00:02:55.899 test-fib: explicitly disabled via build config 00:02:55.899 test-flow-perf: explicitly disabled via build config 00:02:55.899 test-gpudev: explicitly disabled via build config 00:02:55.899 test-mldev: explicitly disabled via build config 00:02:55.899 test-pipeline: explicitly disabled via build config 00:02:55.899 test-pmd: explicitly disabled via build config 00:02:55.899 test-regex: explicitly disabled via build config 00:02:55.899 test-sad: explicitly disabled via build config 00:02:55.899 test-security-perf: explicitly disabled via build config 00:02:55.899 00:02:55.899 libs: 00:02:55.899 argparse: explicitly disabled via build config 00:02:55.899 metrics: explicitly disabled via build config 00:02:55.899 acl: explicitly disabled via build 
config 00:02:55.899 bbdev: explicitly disabled via build config 00:02:55.899 bitratestats: explicitly disabled via build config 00:02:55.899 bpf: explicitly disabled via build config 00:02:55.899 cfgfile: explicitly disabled via build config 00:02:55.899 distributor: explicitly disabled via build config 00:02:55.899 efd: explicitly disabled via build config 00:02:55.899 eventdev: explicitly disabled via build config 00:02:55.899 dispatcher: explicitly disabled via build config 00:02:55.899 gpudev: explicitly disabled via build config 00:02:55.899 gro: explicitly disabled via build config 00:02:55.899 gso: explicitly disabled via build config 00:02:55.899 ip_frag: explicitly disabled via build config 00:02:55.899 jobstats: explicitly disabled via build config 00:02:55.899 latencystats: explicitly disabled via build config 00:02:55.899 lpm: explicitly disabled via build config 00:02:55.899 member: explicitly disabled via build config 00:02:55.899 pcapng: explicitly disabled via build config 00:02:55.899 rawdev: explicitly disabled via build config 00:02:55.899 regexdev: explicitly disabled via build config 00:02:55.899 mldev: explicitly disabled via build config 00:02:55.899 rib: explicitly disabled via build config 00:02:55.899 sched: explicitly disabled via build config 00:02:55.899 stack: explicitly disabled via build config 00:02:55.899 ipsec: explicitly disabled via build config 00:02:55.899 pdcp: explicitly disabled via build config 00:02:55.899 fib: explicitly disabled via build config 00:02:55.899 port: explicitly disabled via build config 00:02:55.899 pdump: explicitly disabled via build config 00:02:55.899 table: explicitly disabled via build config 00:02:55.899 pipeline: explicitly disabled via build config 00:02:55.899 graph: explicitly disabled via build config 00:02:55.899 node: explicitly disabled via build config 00:02:55.899 00:02:55.899 drivers: 00:02:55.899 common/cpt: not in enabled drivers build config 00:02:55.899 common/dpaax: not in enabled drivers build config 00:02:55.899 common/iavf: not in enabled drivers build config 00:02:55.899 common/idpf: not in enabled drivers build config 00:02:55.899 common/ionic: not in enabled drivers build config 00:02:55.899 common/mvep: not in enabled drivers build config 00:02:55.899 common/octeontx: not in enabled drivers build config 00:02:55.899 bus/auxiliary: not in enabled drivers build config 00:02:55.899 bus/cdx: not in enabled drivers build config 00:02:55.899 bus/dpaa: not in enabled drivers build config 00:02:55.899 bus/fslmc: not in enabled drivers build config 00:02:55.899 bus/ifpga: not in enabled drivers build config 00:02:55.899 bus/platform: not in enabled drivers build config 00:02:55.899 bus/uacce: not in enabled drivers build config 00:02:55.899 bus/vmbus: not in enabled drivers build config 00:02:55.899 common/cnxk: not in enabled drivers build config 00:02:55.899 common/mlx5: not in enabled drivers build config 00:02:55.899 common/nfp: not in enabled drivers build config 00:02:55.899 common/nitrox: not in enabled drivers build config 00:02:55.899 common/qat: not in enabled drivers build config 00:02:55.899 common/sfc_efx: not in enabled drivers build config 00:02:55.899 mempool/bucket: not in enabled drivers build config 00:02:55.899 mempool/cnxk: not in enabled drivers build config 00:02:55.899 mempool/dpaa: not in enabled drivers build config 00:02:55.899 mempool/dpaa2: not in enabled drivers build config 00:02:55.899 mempool/octeontx: not in enabled drivers build config 00:02:55.899 mempool/stack: not in 
enabled drivers build config 00:02:55.899 dma/cnxk: not in enabled drivers build config 00:02:55.899 dma/dpaa: not in enabled drivers build config 00:02:55.899 dma/dpaa2: not in enabled drivers build config 00:02:55.899 dma/hisilicon: not in enabled drivers build config 00:02:55.899 dma/idxd: not in enabled drivers build config 00:02:55.899 dma/ioat: not in enabled drivers build config 00:02:55.899 dma/skeleton: not in enabled drivers build config 00:02:55.899 net/af_packet: not in enabled drivers build config 00:02:55.899 net/af_xdp: not in enabled drivers build config 00:02:55.899 net/ark: not in enabled drivers build config 00:02:55.899 net/atlantic: not in enabled drivers build config 00:02:55.899 net/avp: not in enabled drivers build config 00:02:55.899 net/axgbe: not in enabled drivers build config 00:02:55.899 net/bnx2x: not in enabled drivers build config 00:02:55.899 net/bnxt: not in enabled drivers build config 00:02:55.899 net/bonding: not in enabled drivers build config 00:02:55.899 net/cnxk: not in enabled drivers build config 00:02:55.899 net/cpfl: not in enabled drivers build config 00:02:55.899 net/cxgbe: not in enabled drivers build config 00:02:55.899 net/dpaa: not in enabled drivers build config 00:02:55.899 net/dpaa2: not in enabled drivers build config 00:02:55.899 net/e1000: not in enabled drivers build config 00:02:55.899 net/ena: not in enabled drivers build config 00:02:55.899 net/enetc: not in enabled drivers build config 00:02:55.899 net/enetfec: not in enabled drivers build config 00:02:55.899 net/enic: not in enabled drivers build config 00:02:55.899 net/failsafe: not in enabled drivers build config 00:02:55.899 net/fm10k: not in enabled drivers build config 00:02:55.899 net/gve: not in enabled drivers build config 00:02:55.899 net/hinic: not in enabled drivers build config 00:02:55.899 net/hns3: not in enabled drivers build config 00:02:55.899 net/i40e: not in enabled drivers build config 00:02:55.899 net/iavf: not in enabled drivers build config 00:02:55.899 net/ice: not in enabled drivers build config 00:02:55.899 net/idpf: not in enabled drivers build config 00:02:55.899 net/igc: not in enabled drivers build config 00:02:55.899 net/ionic: not in enabled drivers build config 00:02:55.899 net/ipn3ke: not in enabled drivers build config 00:02:55.899 net/ixgbe: not in enabled drivers build config 00:02:55.899 net/mana: not in enabled drivers build config 00:02:55.899 net/memif: not in enabled drivers build config 00:02:55.899 net/mlx4: not in enabled drivers build config 00:02:55.899 net/mlx5: not in enabled drivers build config 00:02:55.899 net/mvneta: not in enabled drivers build config 00:02:55.899 net/mvpp2: not in enabled drivers build config 00:02:55.899 net/netvsc: not in enabled drivers build config 00:02:55.899 net/nfb: not in enabled drivers build config 00:02:55.899 net/nfp: not in enabled drivers build config 00:02:55.899 net/ngbe: not in enabled drivers build config 00:02:55.899 net/null: not in enabled drivers build config 00:02:55.899 net/octeontx: not in enabled drivers build config 00:02:55.899 net/octeon_ep: not in enabled drivers build config 00:02:55.899 net/pcap: not in enabled drivers build config 00:02:55.899 net/pfe: not in enabled drivers build config 00:02:55.899 net/qede: not in enabled drivers build config 00:02:55.899 net/ring: not in enabled drivers build config 00:02:55.899 net/sfc: not in enabled drivers build config 00:02:55.899 net/softnic: not in enabled drivers build config 00:02:55.899 net/tap: not in enabled drivers build 
config 00:02:55.899 net/thunderx: not in enabled drivers build config 00:02:55.899 net/txgbe: not in enabled drivers build config 00:02:55.899 net/vdev_netvsc: not in enabled drivers build config 00:02:55.899 net/vhost: not in enabled drivers build config 00:02:55.899 net/virtio: not in enabled drivers build config 00:02:55.899 net/vmxnet3: not in enabled drivers build config 00:02:55.899 raw/*: missing internal dependency, "rawdev" 00:02:55.899 crypto/armv8: not in enabled drivers build config 00:02:55.899 crypto/bcmfs: not in enabled drivers build config 00:02:55.899 crypto/caam_jr: not in enabled drivers build config 00:02:55.899 crypto/ccp: not in enabled drivers build config 00:02:55.899 crypto/cnxk: not in enabled drivers build config 00:02:55.899 crypto/dpaa_sec: not in enabled drivers build config 00:02:55.899 crypto/dpaa2_sec: not in enabled drivers build config 00:02:55.899 crypto/ipsec_mb: not in enabled drivers build config 00:02:55.899 crypto/mlx5: not in enabled drivers build config 00:02:55.899 crypto/mvsam: not in enabled drivers build config 00:02:55.900 crypto/nitrox: not in enabled drivers build config 00:02:55.900 crypto/null: not in enabled drivers build config 00:02:55.900 crypto/octeontx: not in enabled drivers build config 00:02:55.900 crypto/openssl: not in enabled drivers build config 00:02:55.900 crypto/scheduler: not in enabled drivers build config 00:02:55.900 crypto/uadk: not in enabled drivers build config 00:02:55.900 crypto/virtio: not in enabled drivers build config 00:02:55.900 compress/isal: not in enabled drivers build config 00:02:55.900 compress/mlx5: not in enabled drivers build config 00:02:55.900 compress/nitrox: not in enabled drivers build config 00:02:55.900 compress/octeontx: not in enabled drivers build config 00:02:55.900 compress/zlib: not in enabled drivers build config 00:02:55.900 regex/*: missing internal dependency, "regexdev" 00:02:55.900 ml/*: missing internal dependency, "mldev" 00:02:55.900 vdpa/ifc: not in enabled drivers build config 00:02:55.900 vdpa/mlx5: not in enabled drivers build config 00:02:55.900 vdpa/nfp: not in enabled drivers build config 00:02:55.900 vdpa/sfc: not in enabled drivers build config 00:02:55.900 event/*: missing internal dependency, "eventdev" 00:02:55.900 baseband/*: missing internal dependency, "bbdev" 00:02:55.900 gpu/*: missing internal dependency, "gpudev" 00:02:55.900 00:02:55.900 00:02:55.900 Build targets in project: 85 00:02:55.900 00:02:55.900 DPDK 24.03.0 00:02:55.900 00:02:55.900 User defined options 00:02:55.900 buildtype : debug 00:02:55.900 default_library : static 00:02:55.900 libdir : lib 00:02:55.900 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:55.900 b_sanitize : address 00:02:55.900 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:02:55.900 c_link_args : 00:02:55.900 cpu_instruction_set: native 00:02:55.900 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:02:55.900 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,argparse,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:02:55.900 enable_docs : false 00:02:55.900 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:55.900 enable_kmods : false 00:02:55.900 
max_lcores : 128 00:02:55.900 tests : false 00:02:55.900 00:02:55.900 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:55.900 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:55.900 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:55.900 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:55.900 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:55.900 [4/268] Linking static target lib/librte_kvargs.a 00:02:55.900 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:55.900 [6/268] Linking static target lib/librte_log.a 00:02:55.900 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:55.900 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:55.900 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:55.900 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:55.900 [11/268] Linking static target lib/librte_telemetry.a 00:02:55.900 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:55.900 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:55.900 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:55.900 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:55.900 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:55.900 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:56.159 [18/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.159 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:56.159 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:56.159 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.418 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:56.418 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:56.418 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:56.676 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:56.676 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:56.676 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:56.677 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:56.935 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:56.935 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:56.935 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:56.935 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:57.193 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.193 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:57.193 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.193 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.193 [37/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:57.193 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:57.193 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:57.452 [40/268] Linking target lib/librte_log.so.24.1 00:02:57.452 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:57.452 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.452 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:57.452 [44/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.452 [45/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:57.452 [46/268] Linking target lib/librte_kvargs.so.24.1 00:02:57.711 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:57.711 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:57.711 [49/268] Linking target lib/librte_telemetry.so.24.1 00:02:57.711 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:57.711 [51/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:57.711 [52/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:57.970 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:57.970 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:57.970 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:57.970 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:58.229 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:58.229 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:58.229 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:58.229 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:58.229 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:58.229 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:58.229 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:58.229 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:58.488 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:58.488 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:58.488 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:58.488 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.747 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.747 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:58.747 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:58.747 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.747 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:58.747 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:58.747 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:58.747 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.747 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:59.006 [78/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:59.006 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:59.006 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:59.006 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:59.006 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:59.006 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:59.265 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:59.265 [85/268] Linking static target lib/librte_eal.a 00:02:59.265 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:59.265 [87/268] Linking static target lib/librte_ring.a 00:02:59.265 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:59.265 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:59.265 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:59.524 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.524 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.524 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:59.524 [94/268] Linking static target lib/librte_mempool.a 00:02:59.524 [95/268] Linking static target lib/librte_rcu.a 00:02:59.524 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:59.783 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.783 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.783 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:59.783 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.783 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.043 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.043 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:00.043 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:00.043 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:00.043 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:00.043 [107/268] Linking static target lib/librte_net.a 00:03:00.302 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:00.302 [109/268] Linking static target lib/librte_mbuf.a 00:03:00.302 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.302 [111/268] Linking static target lib/librte_meter.a 00:03:00.302 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.302 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.302 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.302 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.302 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.560 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:00.560 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.824 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:00.824 [120/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.824 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.824 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.103 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.103 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:01.103 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:01.103 [126/268] Linking static target lib/librte_pci.a 00:03:01.362 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:01.362 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:01.362 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.362 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:01.362 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:01.362 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:01.362 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:01.362 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.362 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:01.362 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:01.362 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.362 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:01.362 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:01.362 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:01.362 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:01.620 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:01.620 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:01.620 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:01.620 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:01.878 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:01.878 [147/268] Linking static target lib/librte_cmdline.a 00:03:01.878 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:01.878 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:01.878 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:01.878 [151/268] Linking static target lib/librte_timer.a 00:03:01.878 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.138 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:02.138 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:02.138 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.397 [156/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.397 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.397 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:02.397 [159/268] Linking static target lib/librte_compressdev.a 
00:03:02.397 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:02.397 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:02.656 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:02.656 [163/268] Linking static target lib/librte_hash.a 00:03:02.656 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:02.656 [165/268] Linking static target lib/librte_ethdev.a 00:03:02.656 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:02.656 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:02.656 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.656 [169/268] Linking static target lib/librte_dmadev.a 00:03:02.914 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.914 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:02.914 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:02.914 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:02.914 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.173 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:03.173 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:03.173 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.173 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.173 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:03.431 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:03.431 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:03.431 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.431 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:03.431 [184/268] Linking static target lib/librte_cryptodev.a 00:03:03.689 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:03.689 [186/268] Linking static target lib/librte_power.a 00:03:03.689 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.689 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.689 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.689 [190/268] Linking static target lib/librte_reorder.a 00:03:03.948 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.948 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:03.948 [193/268] Linking static target lib/librte_security.a 00:03:04.207 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.207 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.207 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.483 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.483 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.744 [199/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.744 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.744 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.744 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.744 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.744 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:05.002 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.002 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.261 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.261 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.261 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.261 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.261 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.520 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.520 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.520 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.520 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:05.520 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.520 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.520 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:05.520 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.520 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.521 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.779 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.779 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.779 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.779 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.779 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:06.037 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.979 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.979 [229/268] Linking target lib/librte_eal.so.24.1 00:03:07.238 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:07.238 [231/268] Linking target lib/librte_dmadev.so.24.1 00:03:07.238 [232/268] Linking target lib/librte_pci.so.24.1 00:03:07.238 [233/268] Linking target lib/librte_meter.so.24.1 00:03:07.238 [234/268] Linking target lib/librte_timer.so.24.1 00:03:07.238 [235/268] Linking target lib/librte_ring.so.24.1 00:03:07.238 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:07.238 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:07.238 [238/268] Generating symbol file 
lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:07.238 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:07.238 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:07.496 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:07.496 [242/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:07.496 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:07.496 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:07.496 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:07.496 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:07.496 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:07.496 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:07.496 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:07.754 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:07.754 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:07.754 [252/268] Linking target lib/librte_net.so.24.1 00:03:07.754 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:07.754 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:08.012 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:08.012 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:08.012 [257/268] Linking target lib/librte_hash.so.24.1 00:03:08.012 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:08.012 [259/268] Linking target lib/librte_security.so.24.1 00:03:08.012 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:08.578 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.836 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:08.836 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:09.094 [264/268] Linking target lib/librte_power.so.24.1 00:03:11.036 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:11.036 [266/268] Linking static target lib/librte_vhost.a 00:03:12.405 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.405 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:12.406 INFO: autodetecting backend as ninja 00:03:12.406 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.791 CC lib/log/log.o 00:03:13.791 CC lib/log/log_flags.o 00:03:13.791 CC lib/ut_mock/mock.o 00:03:13.791 CC lib/log/log_deprecated.o 00:03:13.791 CC lib/ut/ut.o 00:03:13.791 LIB libspdk_log.a 00:03:13.791 LIB libspdk_ut.a 00:03:13.791 LIB libspdk_ut_mock.a 00:03:14.050 CXX lib/trace_parser/trace.o 00:03:14.050 CC lib/util/base64.o 00:03:14.050 CC lib/util/bit_array.o 00:03:14.050 CC lib/util/crc16.o 00:03:14.050 CC lib/util/cpuset.o 00:03:14.050 CC lib/util/crc32c.o 00:03:14.050 CC lib/util/crc32.o 00:03:14.050 CC lib/ioat/ioat.o 00:03:14.050 CC lib/dma/dma.o 00:03:14.050 CC lib/vfio_user/host/vfio_user_pci.o 00:03:14.050 CC lib/util/crc32_ieee.o 00:03:14.050 CC lib/util/crc64.o 00:03:14.050 CC lib/vfio_user/host/vfio_user.o 00:03:14.050 LIB libspdk_dma.a 00:03:14.309 CC lib/util/dif.o 00:03:14.309 CC lib/util/fd.o 00:03:14.309 CC 
lib/util/fd_group.o 00:03:14.309 CC lib/util/file.o 00:03:14.309 CC lib/util/hexlify.o 00:03:14.309 CC lib/util/iov.o 00:03:14.309 LIB libspdk_ioat.a 00:03:14.309 CC lib/util/math.o 00:03:14.309 CC lib/util/net.o 00:03:14.309 CC lib/util/pipe.o 00:03:14.309 CC lib/util/strerror_tls.o 00:03:14.309 LIB libspdk_vfio_user.a 00:03:14.567 CC lib/util/string.o 00:03:14.567 CC lib/util/uuid.o 00:03:14.567 CC lib/util/xor.o 00:03:14.567 CC lib/util/zipf.o 00:03:14.824 LIB libspdk_util.a 00:03:15.083 LIB libspdk_trace_parser.a 00:03:15.083 CC lib/rdma_utils/rdma_utils.o 00:03:15.083 CC lib/conf/conf.o 00:03:15.083 CC lib/rdma_provider/common.o 00:03:15.083 CC lib/vmd/vmd.o 00:03:15.083 CC lib/vmd/led.o 00:03:15.083 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:15.083 CC lib/idxd/idxd.o 00:03:15.083 CC lib/json/json_parse.o 00:03:15.083 CC lib/env_dpdk/env.o 00:03:15.083 CC lib/env_dpdk/memory.o 00:03:15.342 CC lib/env_dpdk/pci.o 00:03:15.342 CC lib/json/json_util.o 00:03:15.342 LIB libspdk_rdma_provider.a 00:03:15.342 LIB libspdk_conf.a 00:03:15.342 CC lib/env_dpdk/init.o 00:03:15.342 CC lib/env_dpdk/threads.o 00:03:15.342 LIB libspdk_rdma_utils.a 00:03:15.342 CC lib/json/json_write.o 00:03:15.342 CC lib/env_dpdk/pci_ioat.o 00:03:15.600 CC lib/env_dpdk/pci_virtio.o 00:03:15.600 CC lib/env_dpdk/pci_vmd.o 00:03:15.600 CC lib/env_dpdk/pci_idxd.o 00:03:15.600 CC lib/env_dpdk/pci_event.o 00:03:15.600 CC lib/env_dpdk/sigbus_handler.o 00:03:15.600 CC lib/env_dpdk/pci_dpdk.o 00:03:15.600 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.600 LIB libspdk_json.a 00:03:15.858 CC lib/idxd/idxd_user.o 00:03:15.858 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.858 LIB libspdk_vmd.a 00:03:15.858 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.858 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.858 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.858 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.116 LIB libspdk_idxd.a 00:03:16.116 LIB libspdk_jsonrpc.a 00:03:16.375 CC lib/rpc/rpc.o 00:03:16.633 LIB libspdk_rpc.a 00:03:16.633 LIB libspdk_env_dpdk.a 00:03:16.891 CC lib/trace/trace.o 00:03:16.891 CC lib/notify/notify_rpc.o 00:03:16.891 CC lib/trace/trace_rpc.o 00:03:16.891 CC lib/notify/notify.o 00:03:16.891 CC lib/trace/trace_flags.o 00:03:16.891 CC lib/keyring/keyring.o 00:03:16.891 CC lib/keyring/keyring_rpc.o 00:03:17.149 LIB libspdk_notify.a 00:03:17.149 LIB libspdk_keyring.a 00:03:17.149 LIB libspdk_trace.a 00:03:17.407 CC lib/sock/sock.o 00:03:17.407 CC lib/sock/sock_rpc.o 00:03:17.407 CC lib/thread/thread.o 00:03:17.407 CC lib/thread/iobuf.o 00:03:17.973 LIB libspdk_sock.a 00:03:18.232 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.232 CC lib/nvme/nvme_ctrlr.o 00:03:18.232 CC lib/nvme/nvme_fabric.o 00:03:18.232 CC lib/nvme/nvme_ns_cmd.o 00:03:18.232 CC lib/nvme/nvme_ns.o 00:03:18.232 CC lib/nvme/nvme_pcie_common.o 00:03:18.232 CC lib/nvme/nvme_pcie.o 00:03:18.232 CC lib/nvme/nvme.o 00:03:18.232 CC lib/nvme/nvme_qpair.o 00:03:18.824 CC lib/nvme/nvme_quirks.o 00:03:18.824 CC lib/nvme/nvme_transport.o 00:03:19.083 CC lib/nvme/nvme_discovery.o 00:03:19.083 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.083 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.083 CC lib/nvme/nvme_tcp.o 00:03:19.083 CC lib/nvme/nvme_opal.o 00:03:19.083 LIB libspdk_thread.a 00:03:19.083 CC lib/nvme/nvme_io_msg.o 00:03:19.083 CC lib/nvme/nvme_poll_group.o 00:03:19.342 CC lib/nvme/nvme_zns.o 00:03:19.342 CC lib/nvme/nvme_stubs.o 00:03:19.342 CC lib/nvme/nvme_auth.o 00:03:19.600 CC lib/nvme/nvme_cuse.o 00:03:19.600 CC lib/nvme/nvme_rdma.o 00:03:19.858 CC lib/accel/accel.o 00:03:19.858 CC 
lib/blob/blobstore.o 00:03:19.858 CC lib/blob/request.o 00:03:19.858 CC lib/init/json_config.o 00:03:19.858 CC lib/virtio/virtio.o 00:03:20.117 CC lib/init/subsystem.o 00:03:20.117 CC lib/blob/zeroes.o 00:03:20.375 CC lib/virtio/virtio_vhost_user.o 00:03:20.375 CC lib/init/subsystem_rpc.o 00:03:20.375 CC lib/init/rpc.o 00:03:20.375 CC lib/virtio/virtio_vfio_user.o 00:03:20.375 CC lib/blob/blob_bs_dev.o 00:03:20.375 CC lib/virtio/virtio_pci.o 00:03:20.634 LIB libspdk_init.a 00:03:20.634 CC lib/accel/accel_rpc.o 00:03:20.634 CC lib/accel/accel_sw.o 00:03:20.634 CC lib/event/reactor.o 00:03:20.634 CC lib/event/app.o 00:03:20.634 CC lib/event/log_rpc.o 00:03:20.893 CC lib/event/app_rpc.o 00:03:20.893 CC lib/event/scheduler_static.o 00:03:20.893 LIB libspdk_virtio.a 00:03:20.893 LIB libspdk_nvme.a 00:03:21.151 LIB libspdk_accel.a 00:03:21.151 CC lib/bdev/bdev.o 00:03:21.151 CC lib/bdev/bdev_rpc.o 00:03:21.151 CC lib/bdev/bdev_zone.o 00:03:21.151 CC lib/bdev/part.o 00:03:21.151 CC lib/bdev/scsi_nvme.o 00:03:21.409 LIB libspdk_event.a 00:03:23.939 LIB libspdk_blob.a 00:03:23.939 CC lib/lvol/lvol.o 00:03:23.939 CC lib/blobfs/blobfs.o 00:03:23.939 CC lib/blobfs/tree.o 00:03:24.197 LIB libspdk_bdev.a 00:03:24.472 CC lib/nbd/nbd_rpc.o 00:03:24.472 CC lib/nvmf/ctrlr.o 00:03:24.472 CC lib/nvmf/ctrlr_discovery.o 00:03:24.472 CC lib/nvmf/subsystem.o 00:03:24.472 CC lib/ftl/ftl_core.o 00:03:24.472 CC lib/nvmf/ctrlr_bdev.o 00:03:24.472 CC lib/nbd/nbd.o 00:03:24.472 CC lib/scsi/dev.o 00:03:24.740 CC lib/scsi/lun.o 00:03:24.740 LIB libspdk_blobfs.a 00:03:24.998 CC lib/scsi/port.o 00:03:24.998 CC lib/nvmf/nvmf.o 00:03:24.998 LIB libspdk_lvol.a 00:03:24.998 CC lib/ftl/ftl_init.o 00:03:24.998 CC lib/ftl/ftl_layout.o 00:03:24.998 CC lib/ftl/ftl_debug.o 00:03:24.998 LIB libspdk_nbd.a 00:03:25.258 CC lib/ftl/ftl_io.o 00:03:25.258 CC lib/nvmf/nvmf_rpc.o 00:03:25.258 CC lib/scsi/scsi.o 00:03:25.258 CC lib/scsi/scsi_bdev.o 00:03:25.258 CC lib/scsi/scsi_pr.o 00:03:25.258 CC lib/nvmf/transport.o 00:03:25.258 CC lib/nvmf/tcp.o 00:03:25.519 CC lib/ftl/ftl_sb.o 00:03:25.519 CC lib/ftl/ftl_l2p.o 00:03:25.519 CC lib/ftl/ftl_l2p_flat.o 00:03:25.778 CC lib/nvmf/stubs.o 00:03:25.778 CC lib/ftl/ftl_nv_cache.o 00:03:25.778 CC lib/scsi/scsi_rpc.o 00:03:25.778 CC lib/ftl/ftl_band.o 00:03:26.040 CC lib/ftl/ftl_band_ops.o 00:03:26.040 CC lib/scsi/task.o 00:03:26.040 CC lib/nvmf/mdns_server.o 00:03:26.040 CC lib/nvmf/rdma.o 00:03:26.040 CC lib/nvmf/auth.o 00:03:26.336 CC lib/ftl/ftl_writer.o 00:03:26.336 LIB libspdk_scsi.a 00:03:26.336 CC lib/ftl/ftl_rq.o 00:03:26.336 CC lib/ftl/ftl_reloc.o 00:03:26.336 CC lib/ftl/ftl_l2p_cache.o 00:03:26.594 CC lib/ftl/ftl_p2l.o 00:03:26.594 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.594 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.852 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.852 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:26.852 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.852 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.852 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.852 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.110 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.110 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:27.110 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:27.110 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:27.110 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:27.110 CC lib/ftl/utils/ftl_conf.o 00:03:27.369 CC lib/iscsi/conn.o 00:03:27.369 CC lib/ftl/utils/ftl_md.o 00:03:27.369 CC lib/iscsi/init_grp.o 00:03:27.369 CC lib/vhost/vhost.o 00:03:27.369 CC lib/ftl/utils/ftl_mempool.o 00:03:27.369 CC lib/iscsi/iscsi.o 00:03:27.369 CC 
lib/iscsi/md5.o 00:03:27.369 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.627 CC lib/iscsi/param.o 00:03:27.627 CC lib/iscsi/portal_grp.o 00:03:27.627 CC lib/iscsi/tgt_node.o 00:03:27.627 CC lib/vhost/vhost_rpc.o 00:03:27.627 CC lib/vhost/vhost_scsi.o 00:03:27.885 CC lib/ftl/utils/ftl_property.o 00:03:27.885 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:27.885 CC lib/iscsi/iscsi_subsystem.o 00:03:27.885 CC lib/iscsi/iscsi_rpc.o 00:03:27.885 CC lib/vhost/vhost_blk.o 00:03:28.144 CC lib/vhost/rte_vhost_user.o 00:03:28.144 CC lib/iscsi/task.o 00:03:28.144 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.144 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.402 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.402 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.403 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.403 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.403 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.403 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.403 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.661 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.661 CC lib/ftl/base/ftl_base_dev.o 00:03:28.661 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.661 CC lib/ftl/ftl_trace.o 00:03:28.661 LIB libspdk_nvmf.a 00:03:28.920 LIB libspdk_ftl.a 00:03:28.920 LIB libspdk_vhost.a 00:03:28.920 LIB libspdk_iscsi.a 00:03:29.488 CC module/env_dpdk/env_dpdk_rpc.o 00:03:29.488 CC module/accel/ioat/accel_ioat.o 00:03:29.488 CC module/accel/error/accel_error.o 00:03:29.488 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:29.488 CC module/scheduler/gscheduler/gscheduler.o 00:03:29.488 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.488 CC module/sock/posix/posix.o 00:03:29.488 CC module/keyring/file/keyring.o 00:03:29.488 CC module/accel/dsa/accel_dsa.o 00:03:29.488 CC module/blob/bdev/blob_bdev.o 00:03:29.488 LIB libspdk_env_dpdk_rpc.a 00:03:29.488 CC module/accel/error/accel_error_rpc.o 00:03:29.488 LIB libspdk_scheduler_gscheduler.a 00:03:29.488 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.488 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.746 CC module/keyring/file/keyring_rpc.o 00:03:29.746 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.746 LIB libspdk_scheduler_dynamic.a 00:03:29.746 LIB libspdk_accel_error.a 00:03:29.746 LIB libspdk_accel_ioat.a 00:03:29.746 CC module/keyring/linux/keyring.o 00:03:29.746 CC module/keyring/linux/keyring_rpc.o 00:03:29.746 LIB libspdk_keyring_file.a 00:03:29.746 LIB libspdk_accel_dsa.a 00:03:29.746 CC module/accel/iaa/accel_iaa.o 00:03:29.746 CC module/accel/iaa/accel_iaa_rpc.o 00:03:29.746 LIB libspdk_blob_bdev.a 00:03:30.005 LIB libspdk_keyring_linux.a 00:03:30.005 LIB libspdk_accel_iaa.a 00:03:30.005 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.005 CC module/bdev/gpt/gpt.o 00:03:30.005 CC module/bdev/error/vbdev_error.o 00:03:30.005 CC module/bdev/nvme/bdev_nvme.o 00:03:30.005 CC module/bdev/null/bdev_null.o 00:03:30.005 CC module/bdev/delay/vbdev_delay.o 00:03:30.005 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.005 CC module/bdev/malloc/bdev_malloc.o 00:03:30.264 CC module/bdev/null/bdev_null_rpc.o 00:03:30.264 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.264 LIB libspdk_sock_posix.a 00:03:30.264 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.264 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:30.264 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.264 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.264 LIB libspdk_bdev_null.a 00:03:30.523 LIB libspdk_blobfs_bdev.a 00:03:30.523 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.523 LIB libspdk_bdev_error.a 00:03:30.523 LIB libspdk_bdev_malloc.a 00:03:30.523 
LIB libspdk_bdev_gpt.a 00:03:30.523 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.523 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:30.523 CC module/bdev/raid/bdev_raid.o 00:03:30.781 LIB libspdk_bdev_delay.a 00:03:30.781 CC module/bdev/split/vbdev_split.o 00:03:30.781 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:30.781 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.781 CC module/bdev/aio/bdev_aio.o 00:03:30.782 LIB libspdk_bdev_lvol.a 00:03:30.782 CC module/bdev/ftl/bdev_ftl.o 00:03:30.782 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.782 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.782 CC module/bdev/split/vbdev_split_rpc.o 00:03:31.040 LIB libspdk_bdev_passthru.a 00:03:31.040 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.040 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:31.040 LIB libspdk_bdev_zone_block.a 00:03:31.040 LIB libspdk_bdev_split.a 00:03:31.040 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:31.040 LIB libspdk_bdev_ftl.a 00:03:31.040 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:31.040 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:31.040 CC module/bdev/raid/bdev_raid_rpc.o 00:03:31.040 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:31.040 LIB libspdk_bdev_aio.a 00:03:31.298 CC module/bdev/nvme/nvme_rpc.o 00:03:31.298 CC module/bdev/nvme/bdev_mdns_client.o 00:03:31.298 LIB libspdk_bdev_iscsi.a 00:03:31.298 CC module/bdev/raid/bdev_raid_sb.o 00:03:31.298 CC module/bdev/nvme/vbdev_opal.o 00:03:31.298 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:31.298 CC module/bdev/raid/raid0.o 00:03:31.298 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:31.556 CC module/bdev/raid/raid1.o 00:03:31.556 CC module/bdev/raid/concat.o 00:03:31.556 CC module/bdev/raid/raid5f.o 00:03:31.556 LIB libspdk_bdev_virtio.a 00:03:32.123 LIB libspdk_bdev_raid.a 00:03:32.690 LIB libspdk_bdev_nvme.a 00:03:33.256 CC module/event/subsystems/keyring/keyring.o 00:03:33.256 CC module/event/subsystems/sock/sock.o 00:03:33.256 CC module/event/subsystems/vmd/vmd.o 00:03:33.256 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:33.256 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:33.256 CC module/event/subsystems/iobuf/iobuf.o 00:03:33.256 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:33.256 CC module/event/subsystems/scheduler/scheduler.o 00:03:33.256 LIB libspdk_event_keyring.a 00:03:33.256 LIB libspdk_event_vhost_blk.a 00:03:33.256 LIB libspdk_event_vmd.a 00:03:33.256 LIB libspdk_event_scheduler.a 00:03:33.513 LIB libspdk_event_iobuf.a 00:03:33.513 LIB libspdk_event_sock.a 00:03:33.513 CC module/event/subsystems/accel/accel.o 00:03:33.772 LIB libspdk_event_accel.a 00:03:34.029 CC module/event/subsystems/bdev/bdev.o 00:03:34.287 LIB libspdk_event_bdev.a 00:03:34.545 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.545 CC module/event/subsystems/nbd/nbd.o 00:03:34.545 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.545 CC module/event/subsystems/scsi/scsi.o 00:03:34.545 LIB libspdk_event_nbd.a 00:03:34.545 LIB libspdk_event_scsi.a 00:03:34.804 LIB libspdk_event_nvmf.a 00:03:34.804 CC module/event/subsystems/iscsi/iscsi.o 00:03:34.804 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.062 LIB libspdk_event_vhost_scsi.a 00:03:35.062 LIB libspdk_event_iscsi.a 00:03:35.321 CC app/trace_record/trace_record.o 00:03:35.321 CXX app/trace/trace.o 00:03:35.321 CC app/spdk_lspci/spdk_lspci.o 00:03:35.321 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:35.321 CC examples/util/zipf/zipf.o 00:03:35.321 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.321 CC app/nvmf_tgt/nvmf_main.o 00:03:35.321 CC 
examples/ioat/perf/perf.o 00:03:35.579 CC app/spdk_tgt/spdk_tgt.o 00:03:35.579 CC test/thread/poller_perf/poller_perf.o 00:03:35.579 LINK spdk_lspci 00:03:35.579 LINK interrupt_tgt 00:03:35.579 LINK iscsi_tgt 00:03:35.579 LINK zipf 00:03:35.579 LINK poller_perf 00:03:35.579 LINK nvmf_tgt 00:03:35.579 LINK spdk_trace_record 00:03:35.836 LINK spdk_tgt 00:03:35.836 LINK ioat_perf 00:03:35.836 LINK spdk_trace 00:03:36.403 CC examples/ioat/verify/verify.o 00:03:36.403 CC app/spdk_nvme_perf/perf.o 00:03:36.403 CC app/spdk_nvme_identify/identify.o 00:03:36.403 CC app/spdk_nvme_discover/discovery_aer.o 00:03:36.403 LINK verify 00:03:36.661 CC test/thread/lock/spdk_lock.o 00:03:36.661 LINK spdk_nvme_discover 00:03:36.919 CC test/dma/test_dma/test_dma.o 00:03:37.177 LINK spdk_nvme_perf 00:03:37.177 LINK spdk_nvme_identify 00:03:37.177 CC test/app/bdev_svc/bdev_svc.o 00:03:37.435 LINK test_dma 00:03:37.435 LINK bdev_svc 00:03:37.693 CC examples/thread/thread/thread_ex.o 00:03:37.951 LINK thread 00:03:37.951 CC app/spdk_top/spdk_top.o 00:03:38.517 TEST_HEADER include/spdk/accel.h 00:03:38.517 TEST_HEADER include/spdk/accel_module.h 00:03:38.517 TEST_HEADER include/spdk/assert.h 00:03:38.517 TEST_HEADER include/spdk/barrier.h 00:03:38.517 TEST_HEADER include/spdk/base64.h 00:03:38.517 TEST_HEADER include/spdk/bdev.h 00:03:38.517 TEST_HEADER include/spdk/bdev_module.h 00:03:38.517 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.517 TEST_HEADER include/spdk/bit_array.h 00:03:38.517 TEST_HEADER include/spdk/bit_pool.h 00:03:38.517 LINK spdk_lock 00:03:38.517 TEST_HEADER include/spdk/blob.h 00:03:38.517 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.517 TEST_HEADER include/spdk/blobfs.h 00:03:38.517 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.517 TEST_HEADER include/spdk/conf.h 00:03:38.517 TEST_HEADER include/spdk/config.h 00:03:38.517 TEST_HEADER include/spdk/cpuset.h 00:03:38.517 TEST_HEADER include/spdk/crc16.h 00:03:38.517 TEST_HEADER include/spdk/crc32.h 00:03:38.517 TEST_HEADER include/spdk/crc64.h 00:03:38.517 TEST_HEADER include/spdk/dif.h 00:03:38.517 TEST_HEADER include/spdk/dma.h 00:03:38.517 TEST_HEADER include/spdk/endian.h 00:03:38.517 TEST_HEADER include/spdk/env.h 00:03:38.517 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.517 TEST_HEADER include/spdk/event.h 00:03:38.517 TEST_HEADER include/spdk/fd.h 00:03:38.517 TEST_HEADER include/spdk/fd_group.h 00:03:38.517 TEST_HEADER include/spdk/file.h 00:03:38.517 TEST_HEADER include/spdk/ftl.h 00:03:38.517 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.517 TEST_HEADER include/spdk/hexlify.h 00:03:38.517 TEST_HEADER include/spdk/histogram_data.h 00:03:38.517 TEST_HEADER include/spdk/idxd.h 00:03:38.517 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.517 TEST_HEADER include/spdk/init.h 00:03:38.517 TEST_HEADER include/spdk/ioat.h 00:03:38.517 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.517 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.517 TEST_HEADER include/spdk/json.h 00:03:38.517 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.517 TEST_HEADER include/spdk/keyring.h 00:03:38.517 TEST_HEADER include/spdk/keyring_module.h 00:03:38.517 TEST_HEADER include/spdk/likely.h 00:03:38.517 TEST_HEADER include/spdk/log.h 00:03:38.517 TEST_HEADER include/spdk/lvol.h 00:03:38.518 TEST_HEADER include/spdk/memory.h 00:03:38.518 TEST_HEADER include/spdk/mmio.h 00:03:38.518 TEST_HEADER include/spdk/nbd.h 00:03:38.518 TEST_HEADER include/spdk/net.h 00:03:38.518 TEST_HEADER include/spdk/notify.h 00:03:38.518 TEST_HEADER include/spdk/nvme.h 00:03:38.518 TEST_HEADER 
include/spdk/nvme_intel.h 00:03:38.518 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.518 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.518 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.518 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.518 TEST_HEADER include/spdk/nvmf.h 00:03:38.518 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.518 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.518 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.518 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.518 TEST_HEADER include/spdk/opal.h 00:03:38.518 TEST_HEADER include/spdk/opal_spec.h 00:03:38.518 TEST_HEADER include/spdk/pci_ids.h 00:03:38.518 TEST_HEADER include/spdk/pipe.h 00:03:38.518 TEST_HEADER include/spdk/queue.h 00:03:38.518 CC app/vhost/vhost.o 00:03:38.518 TEST_HEADER include/spdk/reduce.h 00:03:38.518 TEST_HEADER include/spdk/rpc.h 00:03:38.518 TEST_HEADER include/spdk/scheduler.h 00:03:38.518 TEST_HEADER include/spdk/scsi.h 00:03:38.518 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.518 TEST_HEADER include/spdk/sock.h 00:03:38.518 TEST_HEADER include/spdk/stdinc.h 00:03:38.518 TEST_HEADER include/spdk/string.h 00:03:38.518 TEST_HEADER include/spdk/thread.h 00:03:38.518 TEST_HEADER include/spdk/trace.h 00:03:38.518 TEST_HEADER include/spdk/trace_parser.h 00:03:38.518 TEST_HEADER include/spdk/tree.h 00:03:38.518 TEST_HEADER include/spdk/ublk.h 00:03:38.518 TEST_HEADER include/spdk/util.h 00:03:38.518 TEST_HEADER include/spdk/uuid.h 00:03:38.518 TEST_HEADER include/spdk/version.h 00:03:38.518 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:38.518 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:38.518 TEST_HEADER include/spdk/vhost.h 00:03:38.518 TEST_HEADER include/spdk/vmd.h 00:03:38.518 TEST_HEADER include/spdk/xor.h 00:03:38.518 TEST_HEADER include/spdk/zipf.h 00:03:38.518 CXX test/cpp_headers/accel.o 00:03:38.776 LINK vhost 00:03:38.776 CXX test/cpp_headers/accel_module.o 00:03:38.776 CXX test/cpp_headers/assert.o 00:03:39.034 CXX test/cpp_headers/barrier.o 00:03:39.034 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.034 LINK spdk_top 00:03:39.034 CXX test/cpp_headers/base64.o 00:03:39.292 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.292 CXX test/cpp_headers/bdev.o 00:03:39.292 CC test/app/histogram_perf/histogram_perf.o 00:03:39.550 CXX test/cpp_headers/bdev_module.o 00:03:39.550 LINK histogram_perf 00:03:39.550 LINK nvme_fuzz 00:03:39.808 CXX test/cpp_headers/bdev_zone.o 00:03:39.808 CXX test/cpp_headers/bit_array.o 00:03:39.808 CC test/app/jsoncat/jsoncat.o 00:03:39.808 CXX test/cpp_headers/bit_pool.o 00:03:40.066 LINK jsoncat 00:03:40.066 CC test/app/stub/stub.o 00:03:40.066 CXX test/cpp_headers/blob.o 00:03:40.066 LINK stub 00:03:40.324 CXX test/cpp_headers/blob_bdev.o 00:03:40.324 CXX test/cpp_headers/blobfs.o 00:03:40.582 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.582 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.582 CC app/spdk_dd/spdk_dd.o 00:03:40.895 CC app/fio/nvme/fio_plugin.o 00:03:40.895 CXX test/cpp_headers/conf.o 00:03:40.895 CC test/event/event_perf/event_perf.o 00:03:40.895 CC test/event/reactor/reactor.o 00:03:40.895 LINK event_perf 00:03:40.895 LINK mem_callbacks 00:03:41.169 CXX test/cpp_headers/config.o 00:03:41.169 LINK reactor 00:03:41.169 CXX test/cpp_headers/cpuset.o 00:03:41.169 LINK spdk_dd 00:03:41.169 LINK iscsi_fuzz 00:03:41.169 CXX test/cpp_headers/crc16.o 00:03:41.427 CC examples/sock/hello_world/hello_sock.o 00:03:41.427 CXX test/cpp_headers/crc32.o 00:03:41.427 CC test/env/vtophys/vtophys.o 00:03:41.428 LINK spdk_nvme 00:03:41.428 CC 
test/event/reactor_perf/reactor_perf.o 00:03:41.686 CXX test/cpp_headers/crc64.o 00:03:41.686 LINK hello_sock 00:03:41.686 LINK vtophys 00:03:41.686 LINK reactor_perf 00:03:41.686 CXX test/cpp_headers/dif.o 00:03:41.944 CC test/nvme/aer/aer.o 00:03:41.944 CC test/rpc_client/rpc_client_test.o 00:03:41.944 CXX test/cpp_headers/dma.o 00:03:42.202 LINK rpc_client_test 00:03:42.202 CXX test/cpp_headers/endian.o 00:03:42.202 LINK aer 00:03:42.202 CC test/event/app_repeat/app_repeat.o 00:03:42.202 CXX test/cpp_headers/env.o 00:03:42.460 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:42.460 LINK app_repeat 00:03:42.460 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:42.460 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:42.460 CXX test/cpp_headers/env_dpdk.o 00:03:42.460 CC test/env/memory/memory_ut.o 00:03:42.719 CC examples/vmd/lsvmd/lsvmd.o 00:03:42.719 LINK env_dpdk_post_init 00:03:42.719 CXX test/cpp_headers/event.o 00:03:42.719 CC examples/vmd/led/led.o 00:03:42.719 LINK lsvmd 00:03:42.719 CC app/fio/bdev/fio_plugin.o 00:03:42.977 CXX test/cpp_headers/fd.o 00:03:42.977 LINK led 00:03:42.977 LINK vhost_fuzz 00:03:42.977 CXX test/cpp_headers/fd_group.o 00:03:43.235 CXX test/cpp_headers/file.o 00:03:43.493 CC test/nvme/reset/reset.o 00:03:43.493 LINK spdk_bdev 00:03:43.493 CXX test/cpp_headers/ftl.o 00:03:43.751 LINK reset 00:03:43.751 CC test/event/scheduler/scheduler.o 00:03:43.751 LINK memory_ut 00:03:43.751 CXX test/cpp_headers/gpt_spec.o 00:03:43.751 CXX test/cpp_headers/hexlify.o 00:03:43.751 CXX test/cpp_headers/histogram_data.o 00:03:44.010 CXX test/cpp_headers/idxd.o 00:03:44.010 CC test/env/pci/pci_ut.o 00:03:44.268 CC examples/idxd/perf/perf.o 00:03:44.268 CXX test/cpp_headers/idxd_spec.o 00:03:44.268 CXX test/cpp_headers/init.o 00:03:44.268 LINK scheduler 00:03:44.268 CC test/nvme/sgl/sgl.o 00:03:44.546 CXX test/cpp_headers/ioat.o 00:03:44.546 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:44.546 CC test/unit/lib/log/log.c/log_ut.o 00:03:44.546 LINK pci_ut 00:03:44.546 LINK idxd_perf 00:03:44.811 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:44.811 CXX test/cpp_headers/ioat_spec.o 00:03:44.811 LINK histogram_ut 00:03:44.811 LINK sgl 00:03:45.070 LINK log_ut 00:03:45.070 CC test/nvme/e2edp/nvme_dp.o 00:03:45.070 CXX test/cpp_headers/iscsi_spec.o 00:03:45.070 CXX test/cpp_headers/json.o 00:03:45.070 CC test/nvme/overhead/overhead.o 00:03:45.070 CXX test/cpp_headers/jsonrpc.o 00:03:45.329 CXX test/cpp_headers/keyring.o 00:03:45.329 LINK nvme_dp 00:03:45.329 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:45.329 LINK overhead 00:03:45.329 LINK common_ut 00:03:45.329 CC examples/accel/perf/accel_perf.o 00:03:45.329 CC test/accel/dif/dif.o 00:03:45.587 CXX test/cpp_headers/keyring_module.o 00:03:45.588 CXX test/cpp_headers/likely.o 00:03:45.588 LINK base64_ut 00:03:45.846 CXX test/cpp_headers/log.o 00:03:45.846 CC examples/blob/hello_world/hello_blob.o 00:03:45.846 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:45.846 LINK dif 00:03:46.105 CXX test/cpp_headers/lvol.o 00:03:46.105 LINK accel_perf 00:03:46.105 CXX test/cpp_headers/memory.o 00:03:46.105 LINK hello_blob 00:03:46.364 CXX test/cpp_headers/mmio.o 00:03:46.364 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:46.364 CXX test/cpp_headers/nbd.o 00:03:46.364 CXX test/cpp_headers/net.o 00:03:46.622 CXX test/cpp_headers/notify.o 00:03:46.622 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:46.622 LINK bit_array_ut 00:03:46.622 CC test/nvme/err_injection/err_injection.o 00:03:46.622 CXX 
test/cpp_headers/nvme.o 00:03:46.880 CC examples/nvme/hello_world/hello_world.o 00:03:46.880 LINK err_injection 00:03:46.880 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:46.880 CXX test/cpp_headers/nvme_intel.o 00:03:47.138 CXX test/cpp_headers/nvme_ocssd.o 00:03:47.138 LINK hello_world 00:03:47.138 LINK cpuset_ut 00:03:47.138 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:47.395 LINK ioat_ut 00:03:47.395 LINK dma_ut 00:03:47.395 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.395 CXX test/cpp_headers/nvme_spec.o 00:03:47.652 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:47.652 CXX test/cpp_headers/nvme_zns.o 00:03:47.652 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.652 CXX test/cpp_headers/nvmf.o 00:03:47.652 LINK hello_bdev 00:03:47.652 CC examples/nvme/reconnect/reconnect.o 00:03:47.652 LINK crc16_ut 00:03:47.652 CXX test/cpp_headers/nvmf_cmd.o 00:03:47.919 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:47.919 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:47.919 LINK crc32_ieee_ut 00:03:48.193 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:48.193 LINK reconnect 00:03:48.193 CXX test/cpp_headers/nvmf_spec.o 00:03:48.193 CC test/nvme/startup/startup.o 00:03:48.193 CXX test/cpp_headers/nvmf_transport.o 00:03:48.193 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:48.193 LINK startup 00:03:48.451 CXX test/cpp_headers/opal.o 00:03:48.451 LINK crc32c_ut 00:03:48.451 LINK bdevperf 00:03:48.451 CXX test/cpp_headers/opal_spec.o 00:03:48.709 CC examples/nvme/arbitration/arbitration.o 00:03:48.709 LINK nvme_manage 00:03:48.709 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:48.709 CXX test/cpp_headers/pci_ids.o 00:03:48.967 LINK crc64_ut 00:03:48.967 CXX test/cpp_headers/pipe.o 00:03:48.967 LINK arbitration 00:03:49.225 CXX test/cpp_headers/queue.o 00:03:49.225 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:49.225 CXX test/cpp_headers/reduce.o 00:03:49.483 CXX test/cpp_headers/rpc.o 00:03:49.483 CC examples/nvme/hotplug/hotplug.o 00:03:49.483 CC examples/blob/cli/blobcli.o 00:03:49.483 CXX test/cpp_headers/scheduler.o 00:03:49.740 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:49.740 LINK hotplug 00:03:49.740 CC test/nvme/reserve/reserve.o 00:03:49.740 CXX test/cpp_headers/scsi.o 00:03:49.998 LINK cmb_copy 00:03:49.998 CXX test/cpp_headers/scsi_spec.o 00:03:50.257 CXX test/cpp_headers/sock.o 00:03:50.515 LINK reserve 00:03:50.515 CXX test/cpp_headers/stdinc.o 00:03:50.515 CC examples/nvme/abort/abort.o 00:03:50.515 LINK blobcli 00:03:50.515 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:50.773 LINK dif_ut 00:03:50.773 CXX test/cpp_headers/string.o 00:03:50.773 LINK pmr_persistence 00:03:51.031 CXX test/cpp_headers/thread.o 00:03:51.031 CXX test/cpp_headers/trace.o 00:03:51.031 LINK abort 00:03:51.031 CC test/unit/lib/util/file.c/file_ut.o 00:03:51.031 CC test/blobfs/mkfs/mkfs.o 00:03:51.031 CXX test/cpp_headers/trace_parser.o 00:03:51.031 CXX test/cpp_headers/tree.o 00:03:51.289 CXX test/cpp_headers/ublk.o 00:03:51.289 LINK file_ut 00:03:51.289 CXX test/cpp_headers/util.o 00:03:51.289 LINK mkfs 00:03:51.289 CXX test/cpp_headers/uuid.o 00:03:51.547 CC test/lvol/esnap/esnap.o 00:03:51.547 CXX test/cpp_headers/version.o 00:03:51.547 CXX test/cpp_headers/vfio_user_pci.o 00:03:51.547 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.547 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:51.805 CXX test/cpp_headers/vhost.o 00:03:51.805 CXX test/cpp_headers/vmd.o 00:03:51.805 CC test/nvme/simple_copy/simple_copy.o 00:03:51.805 CXX test/cpp_headers/xor.o 00:03:51.805 CC 
test/unit/lib/util/math.c/math_ut.o 00:03:51.805 LINK iov_ut 00:03:51.805 CC test/nvme/connect_stress/connect_stress.o 00:03:52.063 CXX test/cpp_headers/zipf.o 00:03:52.063 LINK math_ut 00:03:52.063 LINK simple_copy 00:03:52.063 CC test/nvme/boot_partition/boot_partition.o 00:03:52.063 LINK connect_stress 00:03:52.321 CC test/unit/lib/util/net.c/net_ut.o 00:03:52.321 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:52.321 LINK boot_partition 00:03:52.321 CC test/unit/lib/util/string.c/string_ut.o 00:03:52.321 CC test/bdev/bdevio/bdevio.o 00:03:52.321 LINK net_ut 00:03:52.579 LINK string_ut 00:03:52.837 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:52.837 LINK bdevio 00:03:52.837 LINK pipe_ut 00:03:53.096 CC examples/nvmf/nvmf/nvmf.o 00:03:53.096 CC test/nvme/compliance/nvme_compliance.o 00:03:53.354 LINK xor_ut 00:03:53.354 CC test/nvme/fused_ordering/fused_ordering.o 00:03:53.354 LINK nvmf 00:03:53.354 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:53.614 CC test/nvme/fdp/fdp.o 00:03:53.614 LINK fused_ordering 00:03:53.614 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:53.614 LINK doorbell_aers 00:03:53.614 LINK nvme_compliance 00:03:53.614 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:53.872 LINK fdp 00:03:54.442 LINK json_util_ut 00:03:54.442 CC test/nvme/cuse/cuse.o 00:03:54.442 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:54.749 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:54.749 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:55.010 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:55.268 LINK pci_event_ut 00:03:55.269 LINK json_write_ut 00:03:55.526 LINK idxd_user_ut 00:03:55.785 LINK cuse 00:03:56.043 LINK idxd_ut 00:03:56.043 LINK json_parse_ut 00:03:56.610 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:56.868 LINK jsonrpc_server_ut 00:03:57.440 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:57.697 LINK esnap 00:03:58.633 LINK rpc_ut 00:03:58.892 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:58.892 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:58.892 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:58.892 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:58.892 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:58.892 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:59.459 LINK keyring_ut 00:03:59.718 LINK notify_ut 00:03:59.976 LINK iobuf_ut 00:04:00.234 LINK posix_ut 00:04:00.828 LINK sock_ut 00:04:01.393 LINK thread_ut 00:04:01.393 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:01.393 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:01.651 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:02.217 LINK nvme_ns_ut 00:04:02.475 LINK nvme_poll_group_ut 00:04:02.475 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:02.733 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:02.733 LINK nvme_ctrlr_cmd_ut 00:04:02.733 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:02.733 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:02.992 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:02.992 LINK nvme_qpair_ut 
00:04:02.992 LINK nvme_ut 00:04:02.992 LINK nvme_ns_ocssd_cmd_ut 00:04:02.992 LINK nvme_quirks_ut 00:04:03.250 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:03.250 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:03.250 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:03.508 LINK nvme_ns_cmd_ut 00:04:03.508 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:03.508 LINK nvme_pcie_ut 00:04:03.766 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:03.766 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:04.024 LINK nvme_io_msg_ut 00:04:04.024 LINK nvme_transport_ut 00:04:04.283 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:04.283 LINK blob_bdev_ut 00:04:04.283 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:04.539 LINK nvme_fabric_ut 00:04:04.539 LINK nvme_opal_ut 00:04:04.796 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:04.796 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:04:04.796 LINK nvme_ctrlr_ut 00:04:04.796 LINK nvme_pcie_common_ut 00:04:05.362 LINK rpc_ut 00:04:05.620 LINK subsystem_ut 00:04:05.877 CC test/unit/lib/event/app.c/app_ut.o 00:04:05.877 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:05.877 LINK nvme_tcp_ut 00:04:06.135 LINK nvme_cuse_ut 00:04:06.135 LINK accel_ut 00:04:06.394 LINK nvme_rdma_ut 00:04:06.651 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:06.651 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:06.651 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:06.651 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:06.651 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:06.651 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:06.909 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:06.909 LINK app_ut 00:04:06.909 LINK scsi_nvme_ut 00:04:07.167 LINK reactor_ut 00:04:07.167 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:07.167 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:07.424 LINK gpt_ut 00:04:07.424 LINK bdev_zone_ut 00:04:07.424 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:07.682 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:07.682 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:08.248 LINK vbdev_zone_block_ut 00:04:08.248 LINK vbdev_lvol_ut 00:04:08.506 LINK bdev_raid_sb_ut 00:04:08.506 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:08.764 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:04:08.764 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:08.764 LINK concat_ut 00:04:09.330 LINK bdev_raid_ut 00:04:09.588 LINK raid1_ut 00:04:09.846 LINK raid0_ut 00:04:10.413 LINK raid5f_ut 00:04:10.980 LINK part_ut 00:04:11.238 LINK bdev_ut 00:04:12.615 LINK blob_ut 00:04:12.873 LINK bdev_ut 00:04:12.873 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:12.873 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:12.873 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:12.873 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:12.873 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:12.873 LINK bdev_nvme_ut 00:04:13.132 LINK tree_ut 00:04:13.132 LINK blobfs_bdev_ut 00:04:13.402 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:13.402 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:13.402 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:13.402 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:13.402 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:13.402 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:13.402 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:13.983 LINK ftl_l2p_ut 
00:04:13.983 LINK dev_ut 00:04:14.241 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:14.241 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:14.500 LINK blobfs_sync_ut 00:04:14.500 LINK blobfs_async_ut 00:04:14.759 LINK ctrlr_bdev_ut 00:04:14.759 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:15.017 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:15.017 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:15.275 LINK lun_ut 00:04:15.275 LINK lvol_ut 00:04:15.275 LINK scsi_ut 00:04:15.534 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:04:15.534 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:15.534 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:15.793 LINK ctrlr_discovery_ut 00:04:16.051 LINK ftl_band_ut 00:04:16.051 LINK subsystem_ut 00:04:16.051 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:16.051 LINK scsi_pr_ut 00:04:16.310 LINK scsi_bdev_ut 00:04:16.310 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:16.310 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:04:16.568 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:16.568 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:16.568 LINK nvmf_ut 00:04:16.568 LINK ftl_bitmap_ut 00:04:16.826 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:17.084 LINK ctrlr_ut 00:04:17.084 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:17.084 LINK ftl_mempool_ut 00:04:17.342 LINK ftl_io_ut 00:04:17.342 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:17.342 LINK auth_ut 00:04:17.342 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:17.600 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:17.600 LINK ftl_p2l_ut 00:04:17.600 LINK ftl_mngt_ut 00:04:17.858 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:17.858 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:18.117 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:18.117 LINK init_grp_ut 00:04:18.375 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:18.375 LINK tcp_ut 00:04:18.634 LINK param_ut 00:04:18.634 LINK ftl_sb_ut 00:04:18.903 LINK ftl_layout_upgrade_ut 00:04:18.903 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:19.175 LINK conn_ut 00:04:19.434 LINK rdma_ut 00:04:19.692 LINK portal_grp_ut 00:04:19.692 LINK transport_ut 00:04:20.259 LINK tgt_node_ut 00:04:20.518 LINK vhost_ut 00:04:20.781 LINK iscsi_ut 00:04:21.349 00:04:21.349 real 2m8.430s 00:04:21.349 user 10m56.092s 00:04:21.349 sys 2m10.017s 00:04:21.349 13:47:10 unittest_build -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:21.349 13:47:10 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:04:21.349 ************************************ 00:04:21.349 END TEST unittest_build 00:04:21.349 ************************************ 00:04:21.349 13:47:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:21.349 13:47:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:21.349 13:47:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:21.349 13:47:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.349 13:47:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:21.349 13:47:10 -- pm/common@44 -- $ pid=2193 00:04:21.349 13:47:10 -- pm/common@50 -- $ kill -TERM 2193 00:04:21.349 13:47:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.349 13:47:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:21.349 13:47:10 -- pm/common@44 -- $ pid=2194 00:04:21.349 13:47:10 -- pm/common@50 -- $ kill -TERM 2194 00:04:21.349 13:47:10 -- 
spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.349 13:47:10 -- nvmf/common.sh@7 -- # uname -s 00:04:21.349 13:47:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.349 13:47:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.350 13:47:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.350 13:47:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.350 13:47:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.350 13:47:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.350 13:47:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.350 13:47:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.350 13:47:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.350 13:47:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.350 13:47:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8e7c8ea6-6715-433d-bf9c-cdd913fd3add 00:04:21.350 13:47:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8e7c8ea6-6715-433d-bf9c-cdd913fd3add 00:04:21.350 13:47:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.350 13:47:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.350 13:47:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.350 13:47:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.350 13:47:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.350 13:47:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.350 13:47:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.350 13:47:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.350 13:47:10 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.350 13:47:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.350 13:47:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.350 13:47:10 -- paths/export.sh@5 -- # export PATH 00:04:21.350 13:47:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.350 13:47:10 -- nvmf/common.sh@47 -- # : 0 00:04:21.350 13:47:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:21.350 13:47:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:21.350 13:47:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.350 13:47:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.350 13:47:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.350 13:47:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:21.350 
13:47:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:21.350 13:47:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:21.350 13:47:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:21.350 13:47:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:21.350 13:47:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:21.350 13:47:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:21.350 13:47:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.350 13:47:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:21.350 13:47:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.350 13:47:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:21.350 13:47:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:21.350 13:47:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:21.350 13:47:10 -- spdk/autotest.sh@48 -- # udevadm_pid=100204 00:04:21.350 13:47:10 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:21.350 13:47:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:21.350 13:47:10 -- pm/common@17 -- # local monitor 00:04:21.350 13:47:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.350 13:47:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.350 13:47:10 -- pm/common@25 -- # sleep 1 00:04:21.350 13:47:10 -- pm/common@21 -- # date +%s 00:04:21.350 13:47:10 -- pm/common@21 -- # date +%s 00:04:21.350 13:47:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721915230 00:04:21.350 13:47:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721915230 00:04:21.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721915230_collect-vmstat.pm.log 00:04:21.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721915230_collect-cpu-load.pm.log 00:04:22.315 13:47:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:22.315 13:47:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:22.315 13:47:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.315 13:47:11 -- common/autotest_common.sh@10 -- # set +x 00:04:22.315 13:47:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:22.315 13:47:11 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:22.315 13:47:11 -- common/autotest_common.sh@10 -- # set +x 00:04:22.574 13:47:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:22.574 13:47:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:22.574 13:47:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:22.574 13:47:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:22.574 13:47:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:22.574 13:47:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:22.574 13:47:11 -- common/autotest_common.sh@1455 -- # uname 00:04:22.574 13:47:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:22.574 13:47:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:22.574 13:47:11 -- 
common/autotest_common.sh@1475 -- # uname 00:04:22.574 13:47:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:22.574 13:47:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:22.574 13:47:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:22.574 13:47:11 -- spdk/autotest.sh@72 -- # hash lcov 00:04:22.574 13:47:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:22.574 13:47:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:22.574 --rc lcov_branch_coverage=1 00:04:22.574 --rc lcov_function_coverage=1 00:04:22.574 --rc genhtml_branch_coverage=1 00:04:22.574 --rc genhtml_function_coverage=1 00:04:22.574 --rc genhtml_legend=1 00:04:22.574 --rc geninfo_all_blocks=1 00:04:22.574 ' 00:04:22.574 13:47:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:22.574 --rc lcov_branch_coverage=1 00:04:22.574 --rc lcov_function_coverage=1 00:04:22.574 --rc genhtml_branch_coverage=1 00:04:22.574 --rc genhtml_function_coverage=1 00:04:22.574 --rc genhtml_legend=1 00:04:22.574 --rc geninfo_all_blocks=1 00:04:22.574 ' 00:04:22.574 13:47:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:22.574 --rc lcov_branch_coverage=1 00:04:22.574 --rc lcov_function_coverage=1 00:04:22.574 --rc genhtml_branch_coverage=1 00:04:22.574 --rc genhtml_function_coverage=1 00:04:22.574 --rc genhtml_legend=1 00:04:22.574 --rc geninfo_all_blocks=1 00:04:22.574 --no-external' 00:04:22.574 13:47:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:22.574 --rc lcov_branch_coverage=1 00:04:22.574 --rc lcov_function_coverage=1 00:04:22.574 --rc genhtml_branch_coverage=1 00:04:22.574 --rc genhtml_function_coverage=1 00:04:22.574 --rc genhtml_legend=1 00:04:22.574 --rc geninfo_all_blocks=1 00:04:22.574 --no-external' 00:04:22.574 13:47:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:22.574 lcov: LCOV version 1.15 00:04:22.574 13:47:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:29.141 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:29.142 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:07.845 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 
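The geninfo "no functions found" warnings here (and in the lines that follow) are expected: they come from the baseline capture above, where lcov -c -i records zero counts for .gcno files whose code has not executed yet. A minimal sketch of the capture-and-merge flow such a baseline normally feeds into, in bash; cov_test.info and cov_total.info are hypothetical names (only cov_base.info appears in this log) and the LCOV_OPTS shown here are abbreviated:

#!/usr/bin/env bash
# Sketch of the lcov flow implied by the LCOV_OPTS exported above.
set -e
src=/home/vagrant/spdk_repo/spdk
out=$src/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# 1. Baseline before the tests: -i records zero counts for every .gcno,
#    which is what produces the 'no functions found' warnings for headers.
lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"

# 2. After the tests, capture the real counters (hypothetical output name).
lcov $LCOV_OPTS --no-external -q -c -t Autotest -d "$src" -o "$out/cov_test.info"

# 3. Merge baseline and test data so files never touched still show up at 0%.
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# 4. Render an HTML report (hypothetical output directory).
genhtml "$out/cov_total.info" -o "$out/coverage"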
00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:07.845 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:07.845 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions 
found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:07.846 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:07.846 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:07.846 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:07.846 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:07.846 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:09.220 13:47:57 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:09.220 13:47:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.220 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:05:09.220 13:47:57 -- spdk/autotest.sh@91 -- # rm -f 00:05:09.221 13:47:57 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:09.221 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:09.479 13:47:58 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:09.479 13:47:58 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:09.479 13:47:58 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:09.479 13:47:58 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:09.479 13:47:58 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:09.479 13:47:58 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:09.479 13:47:58 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:09.479 13:47:58 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.479 13:47:58 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:09.479 13:47:58 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:09.479 13:47:58 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.479 13:47:58 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:09.479 13:47:58 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:09.479 13:47:58 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:09.479 13:47:58 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:09.479 No valid GPT data, bailing 00:05:09.479 13:47:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.479 13:47:58 -- scripts/common.sh@391 -- # pt= 00:05:09.479 13:47:58 -- scripts/common.sh@392 -- # return 1 00:05:09.479 13:47:58 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:09.479 1+0 records in 00:05:09.479 1+0 records out 00:05:09.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449957 s, 233 MB/s 00:05:09.479 13:47:58 -- spdk/autotest.sh@118 -- # sync 00:05:09.479 13:47:58 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:09.479 13:47:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:09.479 13:47:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:10.856 13:47:59 -- spdk/autotest.sh@124 -- # uname -s 00:05:10.856 13:47:59 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:10.856 13:47:59 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:10.856 13:47:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.856 13:47:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.856 13:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.856 ************************************ 00:05:10.856 START TEST setup.sh 
00:05:10.856 ************************************ 00:05:10.856 13:47:59 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:10.856 * Looking for test storage... 00:05:10.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.856 13:47:59 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:10.856 13:47:59 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:10.856 13:47:59 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:10.856 13:47:59 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.856 13:47:59 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.856 13:47:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:10.856 ************************************ 00:05:10.856 START TEST acl 00:05:10.856 ************************************ 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:10.856 * Looking for test storage... 00:05:10.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.856 13:47:59 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.856 13:47:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.856 13:47:59 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:10.856 13:47:59 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:10.856 13:47:59 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:10.856 13:47:59 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:10.856 13:47:59 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:10.856 13:47:59 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.856 13:47:59 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.422 13:48:00 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:11.422 13:48:00 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:11.422 13:48:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.422 13:48:00 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:11.422 13:48:00 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.422 13:48:00 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.681 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:11.681 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:11.681 13:48:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.681 Hugepages 00:05:11.682 node hugesize free / total 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- 
# continue 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.682 00:05:11.682 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:11.682 13:48:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:11.941 13:48:00 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:11.941 13:48:00 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.941 13:48:00 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.941 13:48:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:11.941 ************************************ 00:05:11.941 START TEST denied 00:05:11.941 ************************************ 00:05:11.941 13:48:00 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:11.941 13:48:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:11.941 13:48:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:11.941 13:48:00 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:11.941 13:48:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.941 13:48:00 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.316 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.316 13:48:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.883 00:05:13.883 real 0m1.831s 00:05:13.883 user 0m0.492s 00:05:13.883 sys 0m1.391s 00:05:13.883 13:48:02 
setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.883 ************************************ 00:05:13.883 END TEST denied 00:05:13.883 13:48:02 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:13.883 ************************************ 00:05:13.884 13:48:02 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:13.884 13:48:02 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.884 13:48:02 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.884 13:48:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:13.884 ************************************ 00:05:13.884 START TEST allowed 00:05:13.884 ************************************ 00:05:13.884 13:48:02 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:13.884 13:48:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:13.884 13:48:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:13.884 13:48:02 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:13.884 13:48:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.884 13:48:02 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.262 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.262 13:48:04 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:15.262 13:48:04 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:15.262 13:48:04 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:15.262 13:48:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.262 13:48:04 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.830 ************************************ 00:05:15.830 END TEST allowed 00:05:15.830 ************************************ 00:05:15.830 00:05:15.830 real 0m1.938s 00:05:15.830 user 0m0.485s 00:05:15.830 sys 0m1.454s 00:05:15.830 13:48:04 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.830 13:48:04 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:15.830 00:05:15.830 real 0m4.953s 00:05:15.830 user 0m1.697s 00:05:15.830 sys 0m3.377s 00:05:15.830 13:48:04 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.830 13:48:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:15.830 ************************************ 00:05:15.830 END TEST acl 00:05:15.830 ************************************ 00:05:15.830 13:48:04 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:15.830 13:48:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.830 13:48:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.830 13:48:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:15.830 ************************************ 00:05:15.830 START TEST hugepages 00:05:15.830 ************************************ 00:05:15.830 13:48:04 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:15.830 * Looking for test storage... 
00:05:15.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 2626740 kB' 'MemAvailable: 7397040 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036524 kB' 'Inactive: 3982508 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 130560 kB' 'Active(file): 1035488 kB' 'Inactive(file): 3851948 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 740 kB' 'Writeback: 0 kB' 'AnonPages: 149136 kB' 'Mapped: 68292 kB' 'Shmem: 2600 kB' 'KReclaimable: 204168 kB' 'Slab: 269816 kB' 'SReclaimable: 204168 kB' 'SUnreclaim: 65648 kB' 'KernelStack: 4488 kB' 'PageTables: 4104 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 500860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.830 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.830 
13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.831 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.832 13:48:04 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:15.832 13:48:04 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:15.832 13:48:04 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.832 13:48:04 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.832 13:48:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:15.832 ************************************ 00:05:15.832 START TEST default_setup 00:05:15.832 ************************************ 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.832 13:48:04 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:16.440 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 
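What the trace above has just finished doing: setup/common.sh walked /proc/meminfo key by key until it reached Hugepagesize, returned 2048, and hugepages.sh then zeroed every per-node nr_hugepages file (clear_hp) and exported CLEAR_HUGE=yes before launching the default_setup test via scripts/setup.sh. A minimal stand-alone sketch of that pattern follows; the helper names and the hard-coded 2048kB pool path are illustrative assumptions, not the exact SPDK functions.

  # Illustrative sketch of the pattern visible in the trace, not the SPDK scripts themselves.
  get_meminfo_field() {            # e.g. get_meminfo_field Hugepagesize -> 2048
      local want=$1 key val _
      while IFS=': ' read -r key val _; do
          [[ $key == "$want" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  clear_node_hugepages() {         # needs root: writes 0 into every node's 2048kB pool
      local hp
      for hp in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
          echo 0 > "$hp"
      done
  }

  default_hugepages=$(get_meminfo_field Hugepagesize)   # 2048 (kB) on this VM
  echo "default hugepage size: ${default_hugepages} kB"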
00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4708168 kB' 'MemAvailable: 9478452 kB' 'Buffers: 35352 kB' 'Cached: 4863744 kB' 'SwapCached: 0 kB' 'Active: 1036584 kB' 'Inactive: 3998860 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146960 kB' 'Active(file): 1035536 kB' 'Inactive(file): 3851900 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 165576 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269492 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65340 kB' 'KernelStack: 4320 kB' 'PageTables: 3536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.013 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 
13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:17.014 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4708168 kB' 'MemAvailable: 9478452 kB' 'Buffers: 35352 kB' 'Cached: 4863744 kB' 'SwapCached: 0 kB' 'Active: 1036584 kB' 'Inactive: 3998840 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146940 kB' 'Active(file): 1035536 kB' 'Inactive(file): 3851900 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 165544 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269492 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65340 kB' 'KernelStack: 4304 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.015 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
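The trace around this point is verify_nr_hugepages repeating the same /proc/meminfo walk for AnonHugePages (anon=0), HugePages_Surp and HugePages_Rsvd before comparing the pool counters against the 1024 pages requested by default_setup. A condensed sketch of that bookkeeping, reusing the illustrative get_meminfo_field helper from the earlier note (the expected count of 1024 is taken from this log; the rest is an assumption):

  expected=1024
  anon=$(get_meminfo_field AnonHugePages)     # kB of transparent hugepages; 0 in this run
  surp=$(get_meminfo_field HugePages_Surp)    # surplus pages beyond the configured pool
  resv=$(get_meminfo_field HugePages_Rsvd)    # reserved but not yet faulted-in pages
  total=$(get_meminfo_field HugePages_Total)

  if (( total == expected + surp + resv )); then
      echo "hugepage pool consistent: total=$total surp=$surp resv=$resv anon=${anon}kB"
  else
      echo "unexpected hugepage accounting: total=$total expected=$expected" >&2
  fi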
00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 
13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.016 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4708168 kB' 'MemAvailable: 9478452 kB' 'Buffers: 35352 kB' 'Cached: 4863744 kB' 'SwapCached: 0 kB' 'Active: 1036576 kB' 'Inactive: 3998660 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146760 kB' 'Active(file): 1035536 kB' 'Inactive(file): 3851900 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 165364 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269492 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65340 kB' 'KernelStack: 4336 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 
'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.017 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.017 13:48:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:17.017-00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ <var> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue, repeated for each remaining /proc/meminfo field (Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total, HugePages_Free)
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:17.019 nr_hugepages=1024
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:17.019 resv_hugepages=0
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:17.019 surplus_hugepages=0
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:17.019 anon_hugepages=0
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
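The records above show the pattern that setup/common.sh's get_meminfo helper traces over and over: slurp /proc/meminfo (or a node's own meminfo file), strip any leading "Node <N> " prefix, split each line on ': ', and echo the value once the requested key turns up. A minimal standalone sketch of that lookup, assuming only standard bash and the files shown in the trace; names are illustrative and this is not the SPDK helper itself:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <N> "

get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node lookups read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <N> "; drop that prefix.
        line=${line#Node +([0-9]) }
        # Split "Key:   value [kB]" into the key and its first value field.
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done <"$file"
    return 1
}

# Example: the two lookups the trace performs for the default_setup check.
resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in the run above
total=$(get_meminfo_value HugePages_Total)   # 1024 in the run above
echo "HugePages_Rsvd=$resv HugePages_Total=$total"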
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # local get=HugePages_Total; local node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:05:17.019 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4708420 kB' 'MemAvailable: 9478704 kB' 'Buffers: 35352 kB' 'Cached: 4863744 kB' 'SwapCached: 0 kB' 'Active: 1036576 kB' 'Inactive: 3998692 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146792 kB' 'Active(file): 1035536 kB' 'Inactive(file): 3851900 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 165376 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269492 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65340 kB' 'KernelStack: 4340 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:17.019-00:05:17.020 13:48:05 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue, repeated for each /proc/meminfo field from MemTotal through FilePmdMapped
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.020 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:17.021 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:17.021 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:17.021 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:17.021 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:17.021 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:17.021 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # local get=HugePages_Surp; local node=0; mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:05:17.021 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4708944 kB' 'MemUsed: 7534032 kB' 'SwapCached: 0 kB' 'Active: 1036576 kB' 'Inactive: 3998952 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 147052 kB' 'Active(file): 1035536 kB' 'Inactive(file): 3851900 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 4899096 kB' 'Mapped: 68192 kB' 'AnonPages: 165636 kB' 'Shmem: 2596 kB' 'KernelStack: 4408 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204152 kB' 'Slab: 269492 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
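At this point the trace has done the system-wide consistency check ((( 1024 == nr_hugepages + surp + resv ))) and is about to repeat the lookup against node0's own meminfo file. The same two-level check can be sketched on its own, assuming only /proc/meminfo, /proc/sys/vm/nr_hugepages and the standard per-node sysfs files; the helper name and layout here are assumptions for illustration, not SPDK's verify_nr_hugepages:

#!/usr/bin/env bash
meminfo_get() { # <HugePages_* key> [numa node]
    local file=/proc/meminfo
    if [[ -n ${2:-} && -e /sys/devices/system/node/node$2/meminfo ]]; then
        file=/sys/devices/system/node/node$2/meminfo
    fi
    # HugePages_* counters have no unit suffix, so the last field is the value.
    awk -v key="$1:" '$0 ~ key { print $NF; exit }' "$file"
}

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
total=$(meminfo_get HugePages_Total)
surp=$(meminfo_get HugePages_Surp)
resv=$(meminfo_get HugePages_Rsvd)

# System-wide: the pool must account for requested + surplus + reserved pages.
if (( total == nr_hugepages + surp + resv )); then
    echo "system-wide pool consistent: $total pages"
fi

# Per node: each node under /sys/devices/system/node reports its own share
# (a single node owns the whole pool on the 1-node VM in this run).
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    echo "node$node: HugePages_Total=$(meminfo_get HugePages_Total "$node")" \
         "HugePages_Surp=$(meminfo_get HugePages_Surp "$node")"
done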
00:05:17.021-00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue, repeated for each node0 meminfo field from MemTotal through HugePages_Free
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:17.022 node0=1024 expecting 1024
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:17.022 
00:05:17.022 real	0m1.160s
00:05:17.022 user	0m0.325s
00:05:17.022 sys	0m0.817s
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:17.022 13:48:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:17.022 ************************************
00:05:17.022 END TEST default_setup
00:05:17.022 ************************************
00:05:17.281 13:48:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:17.281 13:48:06 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:17.281 13:48:06 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:17.281 13:48:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:17.281 ************************************
00:05:17.281 START TEST per_node_1G_alloc
00:05:17.281 ************************************
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
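The get_test_nr_hugepages 1048576 0 call above turns a 1 GiB request (1048576 kB) into a count of default-size huge pages and pins it to the requested node, which is why nr_hugepages drops to 512 in the records that follow. A small sketch of that arithmetic, assuming the 2048 kB default page size reported in the meminfo dumps in this log; variable names are illustrative:

#!/usr/bin/env bash
# 1 GiB expressed in kB, as passed to the test helper in the trace.
size_kb=1048576
# Default huge page size on this VM (2048 kB per the meminfo dumps).
default_hugepage_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)
# 1048576 / 2048 = 512 pages for the whole request.
nr_hugepages=$(( size_kb / default_hugepage_kb ))

# All 512 pages are assigned to the single node named on the command line.
nodes_test=()
for node in 0; do
    nodes_test[$node]=$nr_hugepages
done
echo "nr_hugepages=$nr_hugepages, node0 share=${nodes_test[0]}"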
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:17.281 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:17.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:17.540 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:17.805 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
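Here the test exports NRHUGE=512 and HUGENODE=0 and reruns scripts/setup.sh, which performs the actual per-node reservation before the verification pass below. The same step can be reproduced by hand; the repo path matches the one in the trace, and the sysfs path is the kernel's standard per-node hugepage counter (how setup.sh applies the two variables internally is not shown in this log):

#!/usr/bin/env bash
# Reserve 512 x 2 MiB huge pages on NUMA node 0 via SPDK's setup script.
sudo NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# Confirm the kernel granted the reservation on node 0.
node0_pages=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat "$node0_pages"                    # expect 512 after setup.sh
grep HugePages_Total /proc/meminfo    # system-wide view of the same pool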
00:05:17.805 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:17.805 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89-94 -- # local node; local sorted_t; local sorted_s; local surp; local resv; local anon
00:05:17.805 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:17.805 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:17.805 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-29 -- # local get=AnonHugePages; local node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:05:17.805 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5755812 kB' 'MemAvailable: 10526104 kB' 'Buffers: 35352 kB' 'Cached: 4863752 kB' 'SwapCached: 0 kB' 'Active: 1036632 kB' 'Inactive: 3999268 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 147404 kB' 'Active(file): 1035580 kB' 'Inactive(file): 3851864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 304 kB' 'Writeback: 4 kB' 'AnonPages: 166396 kB' 'Mapped: 68232 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269692 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65540 kB' 'KernelStack: 4456 kB' 'PageTables: 3868 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:17.805-00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [[ <var> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue, repeated for each /proc/meminfo field from MemTotal through HardwareCorrupted
00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:17.806 13:48:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.806 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5756044 kB' 'MemAvailable: 10526336 kB' 'Buffers: 35352 kB' 'Cached: 4863752 kB' 'SwapCached: 0 kB' 'Active: 1036628 kB' 'Inactive: 3998748 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146884 kB' 'Active(file): 1035580 kB' 'Inactive(file): 3851864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 308 kB' 'Writeback: 4 kB' 'AnonPages: 165788 kB' 'Mapped: 68272 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269684 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65532 kB' 'KernelStack: 4356 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
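The xtrace above walks setup/common.sh's get_meminfo helper field by field: it snapshots /proc/meminfo (or a node's own meminfo file), strips any "Node <n> " prefix, then reads each "field: value" pair and skips everything that is not the requested key. Below is a minimal sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source; only the paths, field names and shell constructs visible in the log are taken as given, the function name get_meminfo_sketch and the rest of the body are assumptions.

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup, reconstructed from the xtrace above;
# not the actual setup/common.sh implementation.
shopt -s extglob  # needed for the "Node <n> " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}   # field name, optional NUMA node
    local var val _ line
    local mem_f=/proc/meminfo mem
    # With a node argument, read that node's own meminfo instead (assumption
    # based on the /sys/devices/system/node path probed in the trace).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the requested field, keep scanning
        echo "$val"                        # e.g. 0 for AnonHugePages above
        return 0
    done
    return 1
}

# Example: mirrors hugepages.sh@97 above, which ends up with anon=0.
anon=$(get_meminfo_sketch AnonHugePages)
echo "anon=$anon"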
00:05:17.807 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-@32 -- # (repetitive xtrace condensed: MemTotal through HugePages_Rsvd compared against HugePages_Surp, each skipped with continue)
00:05:17.808 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.808 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:17.808 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:17.808 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:17.808 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:17.808 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-@31 -- # (locals condensed: get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, strip "Node <n> " prefixes, IFS=': ')
00:05:17.808 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5756044 kB' 'MemAvailable: 10526332 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036628 kB' 'Inactive: 3998612 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146752 kB' 'Active(file): 1035580 kB' 'Inactive(file): 3851860 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 165332 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269684 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65532 kB' 'KernelStack: 4304 kB' 'PageTables: 3508 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:17.809 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-@32 -- # (repetitive xtrace condensed: MemTotal through HugePages_Free compared against HugePages_Rsvd, each skipped with continue)
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:17.810 nr_hugepages=512
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:17.810 resv_hugepages=0
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:17.810 surplus_hugepages=0
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:17.810 anon_hugepages=0
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
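The hugepages.sh lines just traced (@97 through @109) reduce to a small accounting check: anon, surp and resv all come back 0, nr_hugepages is 512, and the test only proceeds because 512 == nr_hugepages + surp + resv and 512 == nr_hugepages both hold. The sketch below illustrates that bookkeeping, reusing the get_meminfo_sketch helper from the earlier sketch; the function name, argument handling and echo placement are illustrative assumptions, only the two arithmetic comparisons mirror the log.

# Sketch of the accounting checked at hugepages.sh@107 and @109 above; assumes
# get_meminfo_sketch from the earlier sketch is already defined in this shell.
check_hugepage_accounting() {
    local expected=$1   # 512 pages in this run
    local anon surp resv nr_hugepages
    anon=$(get_meminfo_sketch AnonHugePages)           # -> 0 in the log
    surp=$(get_meminfo_sketch HugePages_Surp)          # -> 0 in the log
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # -> 0 in the log
    nr_hugepages=$(get_meminfo_sketch HugePages_Total) # -> 512 in the log
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Both must hold: the expected count is fully backed by HugePages_Total,
    # with no surplus or reserved pages making up the difference.
    (( expected == nr_hugepages + surp + resv )) || return 1
    (( expected == nr_hugepages ))
}

check_hugepage_accounting 512 && echo 'hugepage accounting OK'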
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-@31 -- # (locals condensed: get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, strip "Node <n> " prefixes, IFS=': ')
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5756044 kB' 'MemAvailable: 10526332 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036628 kB' 'Inactive: 3998796 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146936 kB' 'Active(file): 1035580 kB' 'Inactive(file): 3851860 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 165516 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269684 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65532 kB' 'KernelStack: 4340 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:17.810 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-@32 -- # (repetitive xtrace condensed: MemTotal through Shmem compared against HugePages_Total so far, each skipped with continue; the scan continues below)
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.811 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 
13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.812 13:48:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5756044 kB' 'MemUsed: 6486932 kB' 'SwapCached: 0 kB' 'Active: 1036628 kB' 'Inactive: 3998776 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146916 kB' 'Active(file): 1035580 kB' 'Inactive(file): 3851860 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'FilePages: 4899100 kB' 'Mapped: 68192 kB' 'AnonPages: 165472 kB' 'Shmem: 2596 kB' 'KernelStack: 4324 kB' 'PageTables: 3632 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204152 kB' 'Slab: 269684 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
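The block of xtrace above is setup/common.sh's get_meminfo helper answering the call get_meminfo HugePages_Surp 0 from hugepages.sh@117: it switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo because the per-node file exists, strips the leading "Node 0 " from every line, and then scans field by field until the requested key matches (that scan is what continues in the trace below). A minimal bash sketch of the pattern, reconstructed from the traced commands rather than copied from the shipped helper:

  #!/usr/bin/env bash
  # Sketch of the get_meminfo pattern seen in the xtrace; the names (get,
  # node, mem_f, mem) mirror the trace, the body is a reconstruction.
  shopt -s extglob   # the "Node +([0-9]) " prefix strip below needs extglob

  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # Prefer the per-node file when it exists ($node may be empty).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "

      # Same IFS=': ' read / compare / continue loop that fills the trace.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp 0   # prints 0 here, matching the 'echo 0' further down the trace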
00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.812 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 
13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 
13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.813 node0=512 expecting 512 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:17.813 00:05:17.813 real 0m0.756s 00:05:17.813 user 0m0.329s 00:05:17.813 sys 0m0.466s 00:05:17.813 13:48:06 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:17.813 13:48:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:17.813 ************************************
00:05:17.813 END TEST per_node_1G_alloc
00:05:17.813 ************************************
00:05:18.073 13:48:06 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:18.073 13:48:06 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:18.073 13:48:06 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:18.073 13:48:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:18.073 ************************************
00:05:18.073 START TEST even_2G_alloc
00:05:18.073 ************************************
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:18.073 13:48:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:18.331 0000:00:03.0 (1af4 1001): Active devices:
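This is the seam between the two sub-tests: per_node_1G_alloc has just finished with 'node0=512 expecting 512', and even_2G_alloc starts by converting the requested 2 GiB into a count of default-sized hugepages before re-running scripts/setup.sh (whose device-scan output continues below). A rough reconstruction of that step follows; NRHUGE, HUGE_EVEN_ALLOC and the setup.sh path are verbatim from the trace, while the intermediate names and the kB reading of the 2097152 argument are assumptions that happen to match the 1024-page result:

  # Reconstruction of what even_2G_alloc hands to setup.sh (setup/hugepages.sh@152-153).
  requested_kb=2097152                                 # 2 GiB worth of hugepages
  hugepagesize_kb=2048                                 # 'Hugepagesize: 2048 kB' in the meminfo dumps
  nr_hugepages=$(( requested_kb / hugepagesize_kb ))   # = 1024, the value traced as NRHUGE

  NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes \
      /home/vagrant/spdk_repo/spdk/scripts/setup.sh

Judging by the test name and the flag, HUGE_EVEN_ALLOC=yes asks setup.sh to spread the pages evenly across NUMA nodes; on this single-node VM that still means all 1024 land on node0, which is what the meminfo dumps that follow are compared against.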
mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:18.331 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:18.903 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4712796 kB' 'MemAvailable: 9483084 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036660 kB' 'Inactive: 3998628 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146800 kB' 'Active(file): 1035612 kB' 'Inactive(file): 3851828 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 165408 kB' 'Mapped: 68204 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269524 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65372 kB' 'KernelStack: 4336 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
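The read/compare loop running through this stretch is verify_nr_hugepages for the even_2G_alloc case: because the transparent_hugepage setting compared at setup/hugepages.sh@96 ('always [madvise] never') is not [never], the test first fetches AnonHugePages before tallying the HugePages_* counters (the key-by-key scan resumes below and eventually yields anon=0). A condensed stand-in for that check, using awk in place of the traced get_meminfo loop and assuming the standard kernel locations for these files:

  # Condensed sketch of the verify step; paths are the stock procfs/sysfs
  # locations, awk replaces the traced get_meminfo helper for brevity.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

  anon=0
  if [[ $thp != *'[never]'* ]]; then
      # THP is enabled in some mode, so anonymous hugepages get counted too.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in this run
  fi

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  echo "HugePages_Total=$total surplus=$surp AnonHugePages=${anon} kB"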
00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.904 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4712796 kB' 'MemAvailable: 9483084 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036660 kB' 'Inactive: 3998628 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146800 kB' 'Active(file): 1035612 kB' 'Inactive(file): 3851828 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 165408 kB' 'Mapped: 68204 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269524 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65372 kB' 'KernelStack: 4336 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.905 13:48:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.905 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 
13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.906 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4712320 kB' 'MemAvailable: 9482608 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036652 kB' 'Inactive: 3998636 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146808 kB' 'Active(file): 1035612 kB' 'Inactive(file): 3851828 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 165412 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269524 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65372 kB' 'KernelStack: 4288 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.907 13:48:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.907 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.908 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 
13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:18.909 nr_hugepages=1024 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.909 resv_hugepages=0 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.909 surplus_hugepages=0 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.909 anon_hugepages=0 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4712320 kB' 'MemAvailable: 9482608 kB' 'Buffers: 35352 
kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036652 kB' 'Inactive: 3998748 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146920 kB' 'Active(file): 1035612 kB' 'Inactive(file): 3851828 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 165548 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269524 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65372 kB' 'KernelStack: 4352 kB' 'PageTables: 3620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 517608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 
13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 
13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.909 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.910 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4712572 kB' 'MemUsed: 7530404 kB' 'SwapCached: 0 kB' 'Active: 1036652 kB' 'Inactive: 
3998568 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146740 kB' 'Active(file): 1035612 kB' 'Inactive(file): 3851828 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'FilePages: 4899100 kB' 'Mapped: 68192 kB' 'AnonPages: 165616 kB' 'Shmem: 2596 kB' 'KernelStack: 4388 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204152 kB' 'Slab: 269524 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 
13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.911 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.912 node0=1024 expecting 1024 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.912 00:05:18.912 real 0m0.975s 00:05:18.912 user 0m0.352s 00:05:18.912 sys 0m0.660s 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.912 13:48:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:18.912 ************************************ 00:05:18.912 END TEST even_2G_alloc 00:05:18.912 ************************************ 00:05:18.912 13:48:07 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:18.912 13:48:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.912 13:48:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.912 13:48:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:18.912 ************************************ 00:05:18.912 START TEST odd_alloc 00:05:18.912 ************************************ 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:18.912 
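The xtrace above repeatedly walks a meminfo snapshot with IFS=': ' and read -r var val _, skipping keys until the requested one (HugePages_Total system-wide, then HugePages_Surp for node0) matches and its value is echoed back to the hugepages checks. The following is a minimal bash sketch of that lookup pattern, reconstructed from the trace rather than copied from setup/common.sh; the function name get_meminfo_sketch and its exact argument handling are assumptions.

# Sketch (assumption, not the repository implementation): the meminfo lookup
# pattern exercised in the trace above. Reads /proc/meminfo, or the per-node
# /sys/devices/system/node/node<N>/meminfo file when a node index is given,
# and echoes the value of the requested key.
shopt -s extglob   # needed for the +([0-9]) prefix-stripping pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Prefer the per-node file when a node index is supplied and present,
    # mirroring the /sys/devices/system/node/node0/meminfo path in the log.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node <n> "; strip that prefix
    # so both files can be parsed the same way.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # "HugePages_Total:     1024" -> var=HugePages_Total, val=1024
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Hypothetical invocations against the values printed in this log:
#   get_meminfo_sketch HugePages_Total    -> 1024 (system-wide, even_2G_alloc)
#   get_meminfo_sketch HugePages_Surp 0   -> 0    (node0, feeding the
#                                                  "node0=1024 expecting 1024" check)

For the odd_alloc test that starts in the trace, the same lookup is exercised again: the test requests 2098176 kB (HUGEMEM=2049) and sets nr_hugepages=1025, so the subsequent meminfo reads are expected to report an intentionally odd HugePages_Total of 1025.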
13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.912 13:48:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:19.171 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.740 
13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.740 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706112 kB' 'MemAvailable: 9476400 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036664 kB' 'Inactive: 3998668 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146844 kB' 'Active(file): 1035616 kB' 'Inactive(file): 3851824 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 165688 kB' 'Mapped: 68204 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269612 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65460 kB' 'KernelStack: 4336 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 517740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.741 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706112 kB' 'MemAvailable: 9476400 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 
'Active: 1036664 kB' 'Inactive: 3998928 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 147104 kB' 'Active(file): 1035616 kB' 'Inactive(file): 3851824 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 165688 kB' 'Mapped: 68204 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269612 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65460 kB' 'KernelStack: 4336 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 517740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.742 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.004 13:48:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.004 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.005 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706112 kB' 'MemAvailable: 9476400 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036664 kB' 'Inactive: 3999196 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 147372 kB' 'Active(file): 1035616 kB' 'Inactive(file): 3851824 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 165944 kB' 'Mapped: 68204 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269612 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65460 kB' 'KernelStack: 4336 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 517740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.006 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 
13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.007 13:48:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:20.007 nr_hugepages=1025 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:20.007 resv_hugepages=0 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.007 surplus_hugepages=0 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.007 anon_hugepages=0 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.007 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706112 kB' 'MemAvailable: 9476400 kB' 'Buffers: 35352 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036656 kB' 'Inactive: 3999052 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 147228 kB' 'Active(file): 1035616 kB' 'Inactive(file): 3851824 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 165788 kB' 'Mapped: 68192 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269612 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65460 kB' 'KernelStack: 4368 kB' 'PageTables: 3672 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 517740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.008 13:48:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
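For reference, the long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" records above and below this point are the xtrace of the get_meminfo helper in setup/common.sh, which scans /proc/meminfo (or a per-node meminfo file) field by field until it finds the requested key and echoes its value. Reconstructed from the traced commands, the helper amounts to roughly the sketch below; the real script may differ in detail (the shopt line and the final return 1 fallback are assumptions added to make the sketch self-contained).

shopt -s extglob   # assumed here: the +([0-9]) pattern below needs extglob

get_meminfo() {
    local get=$1        # meminfo key to report, e.g. HugePages_Surp or HugePages_Total
    local node=${2:-}   # optional NUMA node number
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # if a node was given and its per-node meminfo exists, read that file instead
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix used in per-node meminfo files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each non-matching field is one "[[ ... ]]" / "continue" pair in the trace
        echo "$val"                        # matching field: the "echo 0" / "echo 1025" records above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # assumed fallback when the key is missing (not exercised in this trace)
}

In the odd_alloc verification being traced here, get_meminfo is called repeatedly (HugePages_Surp, HugePages_Rsvd, HugePages_Total, then HugePages_Surp again for node 0), and the returned values feed the checks at setup/hugepages.sh@107 and @109-110, i.e. (( 1025 == nr_hugepages + surp + resv )) with surp=0 and resv=0.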
00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.009 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4706112 kB' 'MemUsed: 7536864 kB' 'SwapCached: 0 kB' 'Active: 1036656 kB' 'Inactive: 3998720 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 146896 kB' 'Active(file): 1035616 kB' 'Inactive(file): 3851824 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 4899100 kB' 'Mapped: 68192 kB' 'AnonPages: 165456 kB' 'Shmem: 2596 kB' 'KernelStack: 4404 kB' 'PageTables: 3852 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204152 kB' 'Slab: 269612 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.010 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.010 13:48:08 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.011 node0=1025 expecting 1025 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:20.011 00:05:20.011 real 0m0.976s 00:05:20.011 user 0m0.282s 00:05:20.011 sys 0m0.728s 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.011 ************************************ 00:05:20.011 END TEST odd_alloc 00:05:20.011 ************************************ 00:05:20.011 13:48:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:20.011 13:48:08 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:20.011 13:48:08 
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.011 13:48:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.011 13:48:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.011 ************************************ 00:05:20.011 START TEST custom_alloc 00:05:20.011 ************************************ 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:20.011 13:48:08 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.011 13:48:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:20.270 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5762200 kB' 'MemAvailable: 10532488 kB' 'Buffers: 35360 kB' 'Cached: 4863740 kB' 'SwapCached: 0 kB' 'Active: 1036664 kB' 'Inactive: 3993864 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142040 kB' 'Active(file): 1035616 kB' 'Inactive(file): 3851824 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 160224 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 204152 kB' 'Slab: 269384 kB' 'SReclaimable: 204152 kB' 'SUnreclaim: 65232 kB' 'KernelStack: 4276 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 506276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19348 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.527 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.527 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
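For readers following the trace: the custom_alloc preamble above (hugepages.sh@167-187) turns the requested size of 1048576 kB into a hugepage count by dividing by the default 2048 kB hugepage size, giving nr_hugepages=512, and with a single NUMA node pins all of them to node 0 via HUGENODE='nodes_hp[0]=512'. A minimal sketch of that arithmetic, assuming the same 2048 kB default page size (variable layout is illustrative, not the script itself):

    # Illustrative only: how a 1 GiB request becomes 512 hugepages on one node.
    size_kb=1048576              # requested size passed to get_test_nr_hugepages
    default_hugepages_kb=2048    # Hugepagesize reported in /proc/meminfo
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 512
    declare -a nodes_hp
    nodes_hp[0]=$nr_hugepages                            # single-node split
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"
    echo "$HUGENODE"             # nodes_hp[0]=512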
00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.789 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 
13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.790 
13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5763480 kB' 'MemAvailable: 10533780 kB' 'Buffers: 35360 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036668 kB' 'Inactive: 3993272 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141452 kB' 'Active(file): 1035628 kB' 'Inactive(file): 3851820 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 376 kB' 'Writeback: 0 kB' 'AnonPages: 160092 kB' 'Mapped: 67332 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269380 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65224 kB' 'KernelStack: 4208 kB' 'PageTables: 3220 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19332 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.790 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
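The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" records here are setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the requested one and echoing its value. A condensed sketch of that lookup, assuming extglob is enabled as in the traced script (the function name below is illustrative, not the real helper):

    # Condensed sketch of the lookup being traced: scan meminfo key/value
    # pairs until the requested key matches, then print its value.
    shopt -s extglob
    get_meminfo_sketch() {                    # illustrative name
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # same per-node prefix strip as the trace
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }
    get_meminfo_sketch HugePages_Surp         # prints 0 on this machine, per the dump above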
00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5763760 kB' 'MemAvailable: 10534060 kB' 'Buffers: 35360 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036668 kB' 'Inactive: 3993324 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141504 kB' 'Active(file): 1035628 kB' 'Inactive(file): 3851820 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 380 kB' 'Writeback: 0 kB' 'AnonPages: 160160 kB' 'Mapped: 67332 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269380 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65224 kB' 'KernelStack: 4224 kB' 'PageTables: 3256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19332 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.791 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
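The records just below (hugepages.sh@100-110) finish verify_nr_hugepages: with anon=0 and surp=0 gathered above and resv=0 about to be read, the script echoes the summary and checks that the 512 pages it configured match HugePages_Total from /proc/meminfo. A minimal sketch of that bookkeeping, with values taken from the meminfo dump above (plain variables here stand in for the script's locals):

    # Illustrative bookkeeping mirroring the verify_nr_hugepages checks traced next.
    nr_hugepages=512   # HugePages_Total from the meminfo dump above
    anon=0             # AnonHugePages lookup (hugepages.sh@97)
    surp=0             # HugePages_Surp lookup (hugepages.sh@99)
    resv=0             # HugePages_Rsvd lookup (hugepages.sh@100)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )) \
        && echo "hugepage count matches the requested allocation"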
00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.792 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:20.793 nr_hugepages=512 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.793 resv_hugepages=0 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.793 surplus_hugepages=0 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.793 anon_hugepages=0 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5763760 kB' 'MemAvailable: 10534060 kB' 'Buffers: 35360 kB' 'Cached: 4863748 kB' 'SwapCached: 0 kB' 'Active: 1036668 kB' 'Inactive: 3993400 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141580 kB' 'Active(file): 1035628 kB' 'Inactive(file): 3851820 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 380 kB' 'Writeback: 0 kB' 'AnonPages: 160004 kB' 'Mapped: 67332 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269380 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65224 kB' 'KernelStack: 4208 kB' 'PageTables: 3208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19316 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.793 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 
13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 
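The long printf record above is get_meminfo dumping its captured meminfo snapshot one field per argument, and the echo 512 / return 0 records mark the hit on HugePages_Total; the same helper is then invoked again as get_meminfo HugePages_Surp 0 against node0's own meminfo file, which the following records trace. A minimal reconstruction of that helper, assembled from the traced commands (treat it as a sketch of the pattern, not the verbatim setup/common.sh):

#!/usr/bin/env bash
# get_meminfo <field> [node]: pick the global or per-node meminfo file,
# strip the "Node N " prefix, then scan key by key as the trace shows.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # 512 in the snapshot above
get_meminfo HugePages_Surp 0    # the node0 lookup traced next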
00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5763760 kB' 'MemUsed: 6479216 kB' 'SwapCached: 0 kB' 'Active: 1036668 kB' 'Inactive: 3993176 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141356 kB' 'Active(file): 1035628 kB' 'Inactive(file): 3851820 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 380 kB' 'Writeback: 0 kB' 'FilePages: 4899108 kB' 'Mapped: 67332 kB' 'AnonPages: 160068 kB' 'Shmem: 2596 kB' 'KernelStack: 4292 kB' 'PageTables: 3516 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204156 kB' 'Slab: 269380 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.794 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
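For context on the surrounding hugepages.sh records: get_nodes found a single NUMA node and recorded 512 pages for it in nodes_sys, the reserved count (resv=0) was already folded into nodes_test, and the HugePages_Surp value being scanned for here is added next, which leads to the node0=512 expecting 512 line just below. A hedged restatement of the arithmetic the test asserts, using the values this trace reports (not the script itself):

# Values reported by the trace for the custom_alloc case
nr_hugepages=512   # requested for the test
resv=0             # HugePages_Rsvd
surp=0             # HugePages_Surp on node0
total=512          # HugePages_Total from /proc/meminfo

(( total == nr_hugepages + surp + resv )) && echo "system-wide count OK"

# Single node, so it is expected to hold everything
nodes_sys=([0]=512)
nodes_test=([0]=$((nr_hugepages + resv + surp)))
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # node0=512 expecting 512
done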
00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.795 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:21.053 node0=512 expecting 512 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:21.053 00:05:21.053 real 0m0.893s 00:05:21.053 user 0m0.341s 00:05:21.053 sys 0m0.475s 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.053 13:48:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:21.053 ************************************ 00:05:21.053 END TEST custom_alloc 00:05:21.053 ************************************ 00:05:21.053 13:48:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:21.053 13:48:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.053 13:48:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.053 13:48:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:21.053 ************************************ 00:05:21.053 START TEST no_shrink_alloc 00:05:21.053 ************************************ 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:21.053 13:48:09 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.053 13:48:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:21.310 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4716372 kB' 'MemAvailable: 9486736 kB' 'Buffers: 35360 kB' 'Cached: 4863812 kB' 'SwapCached: 0 kB' 'Active: 1036696 kB' 'Inactive: 3993504 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141640 kB' 'Active(file): 1035648 kB' 'Inactive(file): 3851864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 160276 kB' 'Mapped: 67336 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269180 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65024 kB' 'KernelStack: 4176 kB' 'PageTables: 3152 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19348 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.886 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4717128 kB' 'MemAvailable: 9487492 kB' 'Buffers: 35360 kB' 'Cached: 4863812 kB' 'SwapCached: 0 kB' 'Active: 1036688 kB' 'Inactive: 3993108 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141244 kB' 'Active(file): 1035648 kB' 'Inactive(file): 3851864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 159844 kB' 'Mapped: 67336 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269180 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65024 kB' 'KernelStack: 4224 kB' 'PageTables: 3256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19332 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.887 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.888 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
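The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time with IFS=': ' until it reaches the requested key, then echoing that key's value (0 for AnonHugePages and HugePages_Surp in this run). A minimal bash sketch of that loop, reconstructed from the trace rather than taken from the real setup/common.sh, looks roughly like this:

get_meminfo() {                                   # sketch reconstructed from the xtrace; not the verbatim SPDK helper
    local get=$1                                  # meminfo field to report, e.g. HugePages_Surp
    local node=${2:-}                             # optional NUMA node
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # the per-node meminfo file is preferred when a node is given and its sysfs file exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob                              # needed for the +([0-9]) pattern below
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # strip the "Node N " prefix that per-node files carry
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue          # skip every other field, as the trace shows line by line
        echo "$val"                               # only the number is printed; the "kB" unit lands in $_
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

It is invoked as, for example, get_meminfo HugePages_Rsvd, which is exactly what hugepages.sh@100 does in the trace that follows.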
00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4717128 kB' 'MemAvailable: 9487492 kB' 'Buffers: 35360 kB' 'Cached: 4863812 kB' 'SwapCached: 0 kB' 'Active: 1036688 kB' 'Inactive: 3993008 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141144 kB' 'Active(file): 1035648 kB' 'Inactive(file): 3851864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 159996 kB' 'Mapped: 67336 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269180 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65024 kB' 'KernelStack: 4208 kB' 'PageTables: 3208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19332 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 
kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.889 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.890 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.891 nr_hugepages=1024 00:05:21.891 resv_hugepages=0 00:05:21.891 surplus_hugepages=0 00:05:21.891 anon_hugepages=0 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4716876 kB' 'MemAvailable: 9487240 kB' 'Buffers: 35360 kB' 'Cached: 4863812 kB' 'SwapCached: 0 kB' 'Active: 1036688 kB' 'Inactive: 3993052 kB' 'Active(anon): 1040 kB' 
'Inactive(anon): 141188 kB' 'Active(file): 1035648 kB' 'Inactive(file): 3851864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 159808 kB' 'Mapped: 67336 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269180 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65024 kB' 'KernelStack: 4208 kB' 'PageTables: 3208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.891 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
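Condensed, the hugepage accounting traced at hugepages.sh@97-@110 above reduces to the arithmetic below; the literal 1024 on the left of each test is assumed here to be the pool size read back from the kernel's hugepage counters just before the comparison, since the xtrace only shows the already-expanded value:

nr_hugepages=1024    # pool size configured for the test
anon=0               # get_meminfo AnonHugePages   (hugepages.sh@97)
surp=0               # get_meminfo HugePages_Surp  (hugepages.sh@99)
resv=0               # get_meminfo HugePages_Rsvd  (hugepages.sh@100)
(( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107: every page is accounted for
(( 1024 == nr_hugepages ))                 # hugepages.sh@109: pool size matches the request
# hugepages.sh@110 then re-reads HugePages_Total (1024 in the meminfo snapshot) the same way,
# which is the scan the surrounding trace continues to show.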
00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.892 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.893 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.152 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.152 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4716372 kB' 'MemUsed: 7526604 kB' 'SwapCached: 0 kB' 'Active: 1036688 kB' 'Inactive: 3993132 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141268 kB' 'Active(file): 1035648 kB' 'Inactive(file): 3851864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 4899172 kB' 'Mapped: 67336 kB' 'AnonPages: 160156 kB' 'Shmem: 2596 kB' 'KernelStack: 4224 kB' 'PageTables: 3256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204156 kB' 'Slab: 269180 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.153 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:22.154 node0=1024 expecting 1024 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.154 13:48:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.415 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:22.415 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.415 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4711852 kB' 'MemAvailable: 9482220 kB' 'Buffers: 35360 kB' 'Cached: 4863816 kB' 'SwapCached: 0 kB' 'Active: 1036716 kB' 'Inactive: 3993632 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141780 kB' 'Active(file): 1035664 kB' 'Inactive(file): 3851852 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 160532 kB' 'Mapped: 67428 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269452 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65296 kB' 'KernelStack: 4308 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.416 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.417 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4712360 kB' 'MemAvailable: 9482728 kB' 'Buffers: 35360 kB' 'Cached: 4863816 kB' 'SwapCached: 0 kB' 'Active: 1036704 kB' 'Inactive: 3993328 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141476 kB' 'Active(file): 1035664 kB' 'Inactive(file): 3851852 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 160144 kB' 'Mapped: 67336 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269356 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65200 kB' 'KernelStack: 4248 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
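By this point in the trace, hugepages.sh has already confirmed the first half of the check: HugePages_Total (1024) equals the requested page count plus surplus plus reserved, node0 reported no surplus pages, and the test printed node0=1024 expecting 1024 before re-running setup.sh with NRHUGE=512, which reported that 1024 hugepages were already allocated on node0. Reduced to a sketch with the values visible in this trace (illustrative variable names, not the actual hugepages.sh code), the accounting being repeated is:

    # Sketch of the consistency check behind "node0=1024 expecting 1024".
    total=1024          # HugePages_Total, echoed by the scan above
    surp=0              # HugePages_Surp for node0, echoed as 0
    resv=0              # reserved pages, zero in this run
    nr_hugepages=1024   # pages the test expects to stay allocated

    (( total == nr_hugepages + surp + resv )) \
        && echo "node0=$total expecting $nr_hugepages"

The same bookkeeping is now being redone for the post-NRHUGE=512 state, which is why another full pass over /proc/meminfo follows.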
00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.418 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.419 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 
13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4713116 kB' 'MemAvailable: 9483484 kB' 'Buffers: 35360 kB' 'Cached: 4863816 kB' 'SwapCached: 0 kB' 'Active: 1036704 kB' 'Inactive: 3993136 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141284 kB' 'Active(file): 1035664 kB' 'Inactive(file): 3851852 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 160120 kB' 'Mapped: 67336 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269356 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65200 kB' 'KernelStack: 4208 kB' 'PageTables: 3208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.420 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.421 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.422 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
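The trace above is setup/common.sh's get_meminfo helper walking the /proc/meminfo snapshot it just printed: with IFS=': ' it reads each 'key: value' pair, skips (continue) every key that does not match the requested HugePages_Rsvd pattern, and echoes the value once the matching line comes up. A minimal stand-alone sketch of that lookup, reconstructed from the trace (the real function reads from a mapfile'd array and also supports a per-node file; this simplified version reads /proc/meminfo directly and is illustrative only):

    get_meminfo_sketch() {
        local get=$1 var val _
        # IFS=': ' splits "HugePages_Rsvd:       0" into var=HugePages_Rsvd, val=0
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # same skip-until-match pattern as in the trace
            echo "$val"                        # value of the requested key, e.g. 0
            return 0
        done < /proc/meminfo
        return 1
    }
    # get_meminfo_sketch HugePages_Rsvd   -> 0 on this host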
00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.423 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.681 nr_hugepages=1024 00:05:22.681 resv_hugepages=0 00:05:22.681 surplus_hugepages=0 00:05:22.681 anon_hugepages=0 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 
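At this point the helper has produced resv=0 to go with the earlier surp=0, and the script echoes the summary shown above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) before re-reading HugePages_Total for the consistency checks at setup/hugepages.sh@107 and @110. The check only passes when the kernel's total hugepage count equals the requested pool plus surplus and reserved pages; a worked instance with this run's numbers (stand-alone snippet, variable names chosen to mirror hugepages.sh):

    nr_hugepages=1024   # pool size requested by the test
    surp=0              # HugePages_Surp read from /proc/meminfo
    resv=0              # HugePages_Rsvd read from /proc/meminfo
    total=1024          # HugePages_Total read from /proc/meminfo

    # Same arithmetic as the trace's check: 1024 == 1024 + 0 + 0, so the test proceeds.
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'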
00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4713904 kB' 'MemAvailable: 9484272 kB' 'Buffers: 35360 kB' 'Cached: 4863816 kB' 'SwapCached: 0 kB' 'Active: 1036704 kB' 'Inactive: 3993376 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141524 kB' 'Active(file): 1035664 kB' 'Inactive(file): 3851852 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 160156 kB' 'Mapped: 67336 kB' 'Shmem: 2596 kB' 'KReclaimable: 204156 kB' 'Slab: 269356 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65200 kB' 'KernelStack: 4240 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 503488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19348 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:22.681 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.682 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.683 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4714392 kB' 'MemUsed: 7528584 kB' 'SwapCached: 0 kB' 'Active: 1036704 kB' 'Inactive: 3993072 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 141220 kB' 'Active(file): 1035664 kB' 'Inactive(file): 3851852 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'FilePages: 4899176 kB' 'Mapped: 67336 kB' 'AnonPages: 160088 kB' 'Shmem: 2596 kB' 'KernelStack: 4244 kB' 'PageTables: 3112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204156 kB' 'Slab: 269356 kB' 'SReclaimable: 204156 kB' 'SUnreclaim: 65200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.683 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
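Unlike the earlier calls, this get_meminfo invocation was given a node argument (HugePages_Surp 0), so common.sh switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading 'Node 0 ' prefix from every line with the extglob expansion seen at common.sh@29 before running the same key scan. A condensed sketch of that source selection and prefix strip (abbreviated; the real function then falls into the read loop sketched earlier):

    shopt -s extglob                        # needed for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view of the same counters
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node 0 " prefix, as at common.sh@29
    printf '%s\n' "${mem[@]}" | grep -w HugePages_Surp     # the key this scan is after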
00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:22.684 node0=1024 expecting 1024 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:22.684 00:05:22.684 real 0m1.680s 00:05:22.684 user 0m0.618s 00:05:22.684 sys 0m0.928s 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.684 13:48:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:22.684 ************************************ 00:05:22.684 END TEST no_shrink_alloc 00:05:22.684 ************************************ 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
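The long run of IFS=': ' / read -r var val _ / continue entries above is the trace of common.sh walking every field of the node0 meminfo file until it reaches the one the caller asked for (HugePages_Surp here, which is why the loop finally echoes 0 and returns). A minimal standalone sketch of the same pattern, with a hypothetical helper name and assuming a single NUMA node:

#!/usr/bin/env bash
# Sketch of the meminfo scan traced above. "get_mem_field" is a
# hypothetical name; the real helper lives in test/setup/common.sh.
shopt -s extglob

get_mem_field() {
    local want=$1 mem_f=/proc/meminfo line var val _
    local -a mem
    # Prefer the per-NUMA-node file when it exists (node0 assumed here).
    [[ -e /sys/devices/system/node/node0/meminfo ]] &&
        mem_f=/sys/devices/system/node/node0/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node 0 " prefix; strip it so both files
    # parse identically as "Field: value [kB]".
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_mem_field HugePages_Surp    # prints 0 on the node traced above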
00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:22.684 13:48:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:22.684 ************************************ 00:05:22.684 END TEST hugepages 00:05:22.684 ************************************ 00:05:22.684 00:05:22.684 real 0m6.881s 00:05:22.684 user 0m2.452s 00:05:22.684 sys 0m4.288s 00:05:22.684 13:48:11 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.684 13:48:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:22.684 13:48:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:22.684 13:48:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.684 13:48:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.684 13:48:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:22.685 ************************************ 00:05:22.685 START TEST driver 00:05:22.685 ************************************ 00:05:22.685 13:48:11 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:22.941 * Looking for test storage... 00:05:22.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:22.941 13:48:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:22.941 13:48:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.941 13:48:11 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.198 13:48:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:23.198 13:48:12 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.198 13:48:12 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.198 13:48:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:23.198 ************************************ 00:05:23.198 START TEST guess_driver 00:05:23.198 ************************************ 00:05:23.198 13:48:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:23.198 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:23.198 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:23.199 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:23.199 Looking for driver=uio_pci_generic 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.199 13:48:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.764 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:23.764 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:23.764 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.764 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.764 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:23.764 13:48:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.696 13:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:24.696 13:48:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:24.696 13:48:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.696 13:48:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.260 00:05:25.260 real 0m1.901s 00:05:25.260 user 0m0.447s 00:05:25.260 sys 0m1.478s 00:05:25.260 13:48:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.260 ************************************ 00:05:25.260 END TEST guess_driver 00:05:25.260 13:48:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:25.260 ************************************ 00:05:25.260 00:05:25.260 real 0m2.454s 00:05:25.260 user 
0m0.727s 00:05:25.260 sys 0m1.767s 00:05:25.260 13:48:14 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.260 13:48:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:25.260 ************************************ 00:05:25.260 END TEST driver 00:05:25.260 ************************************ 00:05:25.260 13:48:14 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:25.260 13:48:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.260 13:48:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.260 13:48:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:25.260 ************************************ 00:05:25.260 START TEST devices 00:05:25.260 ************************************ 00:05:25.260 13:48:14 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:25.260 * Looking for test storage... 00:05:25.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.260 13:48:14 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:25.260 13:48:14 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:25.260 13:48:14 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.260 13:48:14 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:25.863 13:48:14 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 
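The driver test that just finished picks a userspace I/O driver in two steps: vfio is chosen only when the kernel exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise it falls back to uio_pci_generic if that module resolves via modprobe; on this VM the trace shows zero IOMMU groups and unsafe_vfio=N, so uio_pci_generic wins. A condensed sketch of that decision (the name echoed on the vfio branch is illustrative, not taken from the trace):

#!/usr/bin/env bash
# Condensed sketch of the guess_driver decision traced above.
shopt -s nullglob

pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci                       # IOMMU available: use vfio
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic                # no IOMMU, but the uio module exists
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

pick_driver    # prints uio_pci_generic on the VM traced above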
00:05:25.863 No valid GPT data, bailing 00:05:25.863 13:48:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:25.863 13:48:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:25.863 13:48:14 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:25.863 13:48:14 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:25.863 13:48:14 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.864 13:48:14 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.864 13:48:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:25.864 ************************************ 00:05:25.864 START TEST nvme_mount 00:05:25.864 ************************************ 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- 
setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.864 13:48:14 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:26.796 Creating new GPT entries in memory. 00:05:26.796 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:26.796 other utilities. 00:05:26.796 13:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:26.796 13:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.796 13:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:26.796 13:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.796 13:48:15 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:28.171 Creating new GPT entries in memory. 00:05:28.171 The operation has completed successfully. 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 104589 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:28.171 13:48:16 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.171 13:48:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:28.171 13:48:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:29.547 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.547 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.547 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.547 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:29.547 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount 
-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:29.547 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.804 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:29.804 13:48:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 
-- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.740 13:48:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.998 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:30.998 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:30.998 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:30.998 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.998 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:30.998 13:48:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.998 13:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:30.998 13:48:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.373 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.373 
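The nvme_mount test above boils down to: zap the GPT on the test NVMe disk, create a single partition, wait for udev, put ext4 on it, mount it under test/setup/nvme_mount, place the test_nvme marker file, and later unmount and wipefs everything. A rough standalone sketch of that sequence, keeping the trace's LBA range and using udevadm settle as a stand-in for the repo's sync_dev_uevents.sh helper, intended only for a disposable disk:

#!/usr/bin/env bash
# Sketch of the nvme_mount flow traced above: one GPT partition,
# ext4 on top, mount, then teardown.
set -euo pipefail

disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                # wipe any existing GPT/MBR
sgdisk "$disk" --new=1:2048:264191      # same LBA range as the trace
udevadm settle                          # wait for the partition node

mkfs.ext4 -qF "$part"
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                  # dummy file the test verifies later

# teardown, as the test's cleanup_nvme does
umount "$mnt"
wipefs --all "$part"
wipefs --all "$disk"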
00:05:32.373 real 0m6.316s 00:05:32.373 user 0m0.714s 00:05:32.373 sys 0m3.618s 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.373 13:48:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:32.373 ************************************ 00:05:32.373 END TEST nvme_mount 00:05:32.373 ************************************ 00:05:32.373 13:48:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:32.373 13:48:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.373 13:48:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.373 13:48:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:32.373 ************************************ 00:05:32.373 START TEST dm_mount 00:05:32.373 ************************************ 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:32.373 13:48:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:33.305 Creating new GPT entries in memory. 00:05:33.305 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:33.305 other utilities. 00:05:33.305 13:48:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:33.305 13:48:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.305 13:48:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:33.305 13:48:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.305 13:48:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:34.239 Creating new GPT entries in memory. 00:05:34.239 The operation has completed successfully. 00:05:34.239 13:48:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:34.239 13:48:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.239 13:48:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:34.239 13:48:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:34.239 13:48:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:35.185 The operation has completed successfully. 00:05:35.185 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:35.185 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.185 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 105078 00:05:35.185 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:35.185 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.185 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.185 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.443 
13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.443 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.444 13:48:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:35.705 13:48:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.649 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.649 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:36.649 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:36.907 13:48:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.165 13:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:37.165 13:48:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.098 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.098 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:38.098 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:38.098 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:38.098 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.098 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:38.099 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:38.099 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.099 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:38.355 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:38.355 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:38.355 13:48:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:38.355 00:05:38.355 real 0m6.030s 
00:05:38.355 user 0m0.430s 00:05:38.355 sys 0m2.514s 00:05:38.355 13:48:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.355 13:48:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:38.355 ************************************ 00:05:38.355 END TEST dm_mount 00:05:38.355 ************************************ 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:38.355 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:38.355 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:38.355 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:38.355 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:38.355 13:48:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:38.355 00:05:38.355 real 0m13.090s 00:05:38.355 user 0m1.561s 00:05:38.355 sys 0m6.447s 00:05:38.355 13:48:27 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.355 13:48:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:38.355 ************************************ 00:05:38.355 END TEST devices 00:05:38.355 ************************************ 00:05:38.355 00:05:38.355 real 0m27.678s 00:05:38.355 user 0m6.597s 00:05:38.355 sys 0m16.004s 00:05:38.355 13:48:27 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.355 13:48:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:38.355 ************************************ 00:05:38.355 END TEST setup.sh 00:05:38.355 ************************************ 00:05:38.355 13:48:27 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:38.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:38.917 Hugepages 00:05:38.917 node hugesize free / total 00:05:38.917 node0 1048576kB 0 / 0 00:05:38.917 node0 2048kB 2048 / 2048 00:05:38.917 00:05:38.917 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:38.917 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:38.917 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:38.917 13:48:27 -- spdk/autotest.sh@130 -- # uname -s 00:05:38.917 13:48:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 
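The dm_mount test traced above does the same dance with two partitions joined by device-mapper: nvme0n1p1 and nvme0n1p2 are created, combined into /dev/mapper/nvme_dm_test, formatted, mounted, and finally removed with dmsetup remove --force. A sketch of that flow follows; note the linear table is an assumption, since the trace only shows the dmsetup create call, not the table it was given:

#!/usr/bin/env bash
# Sketch of the dm_mount flow traced above: two GPT partitions are
# combined into a single device-mapper device, formatted and mounted.
set -euo pipefail

disk=/dev/nvme0n1
p1=${disk}p1
p2=${disk}p2
mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount

sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:264191      # same LBA ranges as the trace
sgdisk "$disk" --new=2:264192:526335
udevadm settle

# Concatenate p1 and p2 into /dev/mapper/nvme_dm_test (sizes in 512 B sectors).
p1_sz=$(blockdev --getsz "$p1")
p2_sz=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $p1_sz linear $p1 0
$p1_sz $p2_sz linear $p2 0
EOF

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p "$mnt"
mount /dev/mapper/nvme_dm_test "$mnt"

# teardown, as cleanup_dm does
umount "$mnt"
dmsetup remove --force nvme_dm_test
wipefs --all "$p1"
wipefs --all "$p2"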
00:05:38.917 13:48:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:38.917 13:48:27 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:39.481 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.451 13:48:29 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:41.823 13:48:30 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:41.823 13:48:30 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:41.823 13:48:30 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:41.823 13:48:30 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:41.823 13:48:30 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:41.823 13:48:30 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:41.823 13:48:30 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.823 13:48:30 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.823 13:48:30 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:41.823 13:48:30 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:41.823 13:48:30 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:41.823 13:48:30 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:41.823 Waiting for block devices as requested 00:05:42.079 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.079 13:48:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:42.079 13:48:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.079 13:48:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:42.079 13:48:30 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.079 13:48:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:42.079 13:48:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:42.079 13:48:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:42.079 13:48:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:42.079 13:48:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:42.079 13:48:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:42.079 13:48:31 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:42.079 13:48:31 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:42.079 13:48:31 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:42.079 13:48:31 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:42.079 13:48:31 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:42.079 13:48:31 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:42.079 13:48:31 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:42.079 13:48:31 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:42.079 13:48:31 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:42.079 13:48:31 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:42.079 13:48:31 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:42.079 13:48:31 -- 
common/autotest_common.sh@1557 -- # continue 00:05:42.079 13:48:31 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:42.079 13:48:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.079 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:05:42.079 13:48:31 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:42.079 13:48:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.079 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:05:42.079 13:48:31 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:42.641 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.577 13:48:32 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:43.577 13:48:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.577 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:05:43.852 13:48:32 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:43.852 13:48:32 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:43.852 13:48:32 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.852 13:48:32 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:43.852 13:48:32 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:43.852 13:48:32 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:43.852 13:48:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:43.852 13:48:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:43.852 13:48:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.852 13:48:32 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.852 13:48:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:43.852 13:48:32 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:43.852 13:48:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:43.852 13:48:32 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:43.852 13:48:32 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:43.852 13:48:32 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:43.852 13:48:32 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.852 13:48:32 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:43.852 13:48:32 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:43.852 13:48:32 -- common/autotest_common.sh@1593 -- # return 0 00:05:43.852 13:48:32 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:43.852 13:48:32 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:43.852 13:48:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.852 13:48:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.852 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:05:43.852 ************************************ 00:05:43.852 START TEST unittest 00:05:43.852 ************************************ 00:05:43.852 13:48:32 unittest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:43.852 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:43.852 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:43.852 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:43.852 +++ dirname 
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:43.852 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:43.852 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:43.852 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:43.852 ++ rpc_py=rpc_cmd 00:05:43.852 ++ set -e 00:05:43.852 ++ shopt -s nullglob 00:05:43.852 ++ shopt -s extglob 00:05:43.852 ++ shopt -s inherit_errexit 00:05:43.852 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:43.852 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:43.852 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:43.852 +++ CONFIG_WPDK_DIR= 00:05:43.852 +++ CONFIG_ASAN=y 00:05:43.852 +++ CONFIG_VBDEV_COMPRESS=n 00:05:43.852 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:43.852 +++ CONFIG_USDT=n 00:05:43.852 +++ CONFIG_CUSTOMOCF=n 00:05:43.852 +++ CONFIG_PREFIX=/usr/local 00:05:43.852 +++ CONFIG_RBD=n 00:05:43.852 +++ CONFIG_LIBDIR= 00:05:43.852 +++ CONFIG_IDXD=y 00:05:43.852 +++ CONFIG_NVME_CUSE=y 00:05:43.852 +++ CONFIG_SMA=n 00:05:43.852 +++ CONFIG_VTUNE=n 00:05:43.852 +++ CONFIG_TSAN=n 00:05:43.852 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:43.852 +++ CONFIG_VFIO_USER_DIR= 00:05:43.852 +++ CONFIG_PGO_CAPTURE=n 00:05:43.852 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:43.852 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:43.852 +++ CONFIG_LTO=n 00:05:43.852 +++ CONFIG_ISCSI_INITIATOR=y 00:05:43.852 +++ CONFIG_CET=n 00:05:43.852 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:43.852 +++ CONFIG_OCF_PATH= 00:05:43.853 +++ CONFIG_RDMA_SET_TOS=y 00:05:43.853 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:43.853 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:43.853 +++ CONFIG_UBLK=n 00:05:43.853 +++ CONFIG_ISAL_CRYPTO=y 00:05:43.853 +++ CONFIG_OPENSSL_PATH= 00:05:43.853 +++ CONFIG_OCF=n 00:05:43.853 +++ CONFIG_FUSE=n 00:05:43.853 +++ CONFIG_VTUNE_DIR= 00:05:43.853 +++ CONFIG_FUZZER_LIB= 00:05:43.853 +++ CONFIG_FUZZER=n 00:05:43.853 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:43.853 +++ CONFIG_CRYPTO=n 00:05:43.853 +++ CONFIG_PGO_USE=n 00:05:43.853 +++ CONFIG_VHOST=y 00:05:43.853 +++ CONFIG_DAOS=n 00:05:43.853 +++ CONFIG_DPDK_INC_DIR= 00:05:43.853 +++ CONFIG_DAOS_DIR= 00:05:43.853 +++ CONFIG_UNIT_TESTS=y 00:05:43.853 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:43.853 +++ CONFIG_VIRTIO=y 00:05:43.853 +++ CONFIG_DPDK_UADK=n 00:05:43.853 +++ CONFIG_COVERAGE=y 00:05:43.853 +++ CONFIG_RDMA=y 00:05:43.853 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:43.853 +++ CONFIG_URING_PATH= 00:05:43.853 +++ CONFIG_XNVME=n 00:05:43.853 +++ CONFIG_VFIO_USER=n 00:05:43.853 +++ CONFIG_ARCH=native 00:05:43.853 +++ CONFIG_HAVE_EVP_MAC=y 00:05:43.853 +++ CONFIG_URING_ZNS=n 00:05:43.853 +++ CONFIG_WERROR=y 00:05:43.853 +++ CONFIG_HAVE_LIBBSD=n 00:05:43.853 +++ CONFIG_UBSAN=y 00:05:43.853 +++ CONFIG_IPSEC_MB_DIR= 00:05:43.853 +++ CONFIG_GOLANG=n 00:05:43.853 +++ CONFIG_ISAL=y 00:05:43.853 +++ CONFIG_IDXD_KERNEL=n 00:05:43.853 +++ CONFIG_DPDK_LIB_DIR= 00:05:43.853 +++ CONFIG_RDMA_PROV=verbs 00:05:43.853 +++ CONFIG_APPS=y 00:05:43.853 +++ CONFIG_SHARED=n 00:05:43.853 +++ CONFIG_HAVE_KEYUTILS=y 00:05:43.853 +++ CONFIG_FC_PATH= 00:05:43.853 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:43.853 +++ CONFIG_FC=n 00:05:43.853 +++ CONFIG_AVAHI=n 00:05:43.853 +++ CONFIG_FIO_PLUGIN=y 00:05:43.853 +++ CONFIG_RAID5F=y 00:05:43.853 +++ CONFIG_EXAMPLES=y 00:05:43.853 +++ CONFIG_TESTS=y 00:05:43.853 +++ CONFIG_CRYPTO_MLX5=n 00:05:43.853 +++ CONFIG_MAX_LCORES=128 00:05:43.853 +++ CONFIG_IPSEC_MB=n 00:05:43.853 +++ CONFIG_PGO_DIR= 
00:05:43.853 +++ CONFIG_DEBUG=y 00:05:43.853 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:43.853 +++ CONFIG_CROSS_PREFIX= 00:05:43.853 +++ CONFIG_URING=n 00:05:43.853 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:43.853 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:43.853 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:43.853 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:43.853 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:43.853 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:43.853 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:43.853 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:43.853 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:43.853 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:43.853 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:43.853 +++ VHOST_APP=("$_app_dir/vhost") 00:05:43.853 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:43.853 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:43.853 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:43.853 +++ [[ #ifndef SPDK_CONFIG_H 00:05:43.853 #define SPDK_CONFIG_H 00:05:43.853 #define SPDK_CONFIG_APPS 1 00:05:43.853 #define SPDK_CONFIG_ARCH native 00:05:43.853 #define SPDK_CONFIG_ASAN 1 00:05:43.853 #undef SPDK_CONFIG_AVAHI 00:05:43.853 #undef SPDK_CONFIG_CET 00:05:43.853 #define SPDK_CONFIG_COVERAGE 1 00:05:43.853 #define SPDK_CONFIG_CROSS_PREFIX 00:05:43.853 #undef SPDK_CONFIG_CRYPTO 00:05:43.853 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:43.853 #undef SPDK_CONFIG_CUSTOMOCF 00:05:43.853 #undef SPDK_CONFIG_DAOS 00:05:43.853 #define SPDK_CONFIG_DAOS_DIR 00:05:43.853 #define SPDK_CONFIG_DEBUG 1 00:05:43.853 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:43.853 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:43.853 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:43.853 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:43.853 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:43.853 #undef SPDK_CONFIG_DPDK_UADK 00:05:43.853 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:43.853 #define SPDK_CONFIG_EXAMPLES 1 00:05:43.853 #undef SPDK_CONFIG_FC 00:05:43.853 #define SPDK_CONFIG_FC_PATH 00:05:43.853 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:43.853 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:43.853 #undef SPDK_CONFIG_FUSE 00:05:43.853 #undef SPDK_CONFIG_FUZZER 00:05:43.853 #define SPDK_CONFIG_FUZZER_LIB 00:05:43.853 #undef SPDK_CONFIG_GOLANG 00:05:43.853 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:43.853 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:43.853 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:43.853 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:43.853 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:43.853 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:43.853 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:43.853 #define SPDK_CONFIG_IDXD 1 00:05:43.853 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:43.853 #undef SPDK_CONFIG_IPSEC_MB 00:05:43.853 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:43.853 #define SPDK_CONFIG_ISAL 1 00:05:43.853 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:43.853 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:43.853 #define SPDK_CONFIG_LIBDIR 00:05:43.853 #undef SPDK_CONFIG_LTO 00:05:43.853 #define SPDK_CONFIG_MAX_LCORES 128 00:05:43.853 #define SPDK_CONFIG_NVME_CUSE 1 00:05:43.853 #undef SPDK_CONFIG_OCF 00:05:43.853 #define SPDK_CONFIG_OCF_PATH 00:05:43.853 #define SPDK_CONFIG_OPENSSL_PATH 00:05:43.853 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:43.853 #define SPDK_CONFIG_PGO_DIR 00:05:43.853 #undef 
SPDK_CONFIG_PGO_USE 00:05:43.853 #define SPDK_CONFIG_PREFIX /usr/local 00:05:43.853 #define SPDK_CONFIG_RAID5F 1 00:05:43.853 #undef SPDK_CONFIG_RBD 00:05:43.853 #define SPDK_CONFIG_RDMA 1 00:05:43.853 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:43.853 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:43.853 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:43.853 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:43.853 #undef SPDK_CONFIG_SHARED 00:05:43.853 #undef SPDK_CONFIG_SMA 00:05:43.853 #define SPDK_CONFIG_TESTS 1 00:05:43.853 #undef SPDK_CONFIG_TSAN 00:05:43.853 #undef SPDK_CONFIG_UBLK 00:05:43.853 #define SPDK_CONFIG_UBSAN 1 00:05:43.853 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:43.853 #undef SPDK_CONFIG_URING 00:05:43.853 #define SPDK_CONFIG_URING_PATH 00:05:43.853 #undef SPDK_CONFIG_URING_ZNS 00:05:43.853 #undef SPDK_CONFIG_USDT 00:05:43.853 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:43.853 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:43.853 #undef SPDK_CONFIG_VFIO_USER 00:05:43.853 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:43.853 #define SPDK_CONFIG_VHOST 1 00:05:43.853 #define SPDK_CONFIG_VIRTIO 1 00:05:43.853 #undef SPDK_CONFIG_VTUNE 00:05:43.853 #define SPDK_CONFIG_VTUNE_DIR 00:05:43.853 #define SPDK_CONFIG_WERROR 1 00:05:43.853 #define SPDK_CONFIG_WPDK_DIR 00:05:43.853 #undef SPDK_CONFIG_XNVME 00:05:43.853 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:43.853 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:43.853 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.853 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:43.853 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.853 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.853 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:43.853 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:43.853 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:43.853 ++++ export PATH 00:05:43.853 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:43.853 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:43.853 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:43.853 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:43.853 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:43.853 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:43.853 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:43.853 +++ TEST_TAG=N/A 00:05:43.853 +++ 
TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:43.853 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:05:43.853 ++++ uname -s 00:05:43.853 +++ PM_OS=Linux 00:05:43.853 +++ MONITOR_RESOURCES_SUDO=() 00:05:43.853 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:43.853 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:43.853 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:43.853 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:43.853 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:43.853 +++ SUDO[0]= 00:05:43.853 +++ SUDO[1]='sudo -E' 00:05:43.853 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:43.853 +++ [[ Linux == FreeBSD ]] 00:05:43.853 +++ [[ Linux == Linux ]] 00:05:43.853 +++ [[ QEMU != QEMU ]] 00:05:43.853 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:43.853 ++ : 0 00:05:43.853 ++ export RUN_NIGHTLY 00:05:43.853 ++ : 0 00:05:43.854 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_RUN_VALGRIND 00:05:43.854 ++ : 1 00:05:43.854 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:43.854 ++ : 1 00:05:43.854 ++ export SPDK_TEST_UNITTEST 00:05:43.854 ++ : 00:05:43.854 ++ export SPDK_TEST_AUTOBUILD 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_RELEASE_BUILD 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_ISAL 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_ISCSI 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:43.854 ++ : 1 00:05:43.854 ++ export SPDK_TEST_NVME 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_NVME_PMR 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_NVME_BP 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_NVME_CLI 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_NVME_CUSE 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_NVME_FDP 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_NVMF 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_VFIOUSER 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_FUZZER 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_FUZZER_SHORT 00:05:43.854 ++ : rdma 00:05:43.854 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_RBD 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_VHOST 00:05:43.854 ++ : 1 00:05:43.854 ++ export SPDK_TEST_BLOCKDEV 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_IOAT 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_BLOBFS 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_VHOST_INIT 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_LVOL 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:43.854 ++ : 1 00:05:43.854 ++ export SPDK_RUN_ASAN 00:05:43.854 ++ : 1 00:05:43.854 ++ export SPDK_RUN_UBSAN 00:05:43.854 ++ : 00:05:43.854 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_RUN_NON_ROOT 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_CRYPTO 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_FTL 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_OCF 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_VMD 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_OPAL 00:05:43.854 ++ : 00:05:43.854 ++ export SPDK_TEST_NATIVE_DPDK 00:05:43.854 ++ : true 00:05:43.854 ++ export SPDK_AUTOTEST_X 00:05:43.854 ++ : 1 00:05:43.854 ++ export SPDK_TEST_RAID5 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_URING 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_USDT 00:05:43.854 
++ : 0 00:05:43.854 ++ export SPDK_TEST_USE_IGB_UIO 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_SCHEDULER 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_SCANBUILD 00:05:43.854 ++ : 00:05:43.854 ++ export SPDK_TEST_NVMF_NICS 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_SMA 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_DAOS 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_XNVME 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_ACCEL 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_ACCEL_DSA 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_ACCEL_IAA 00:05:43.854 ++ : 00:05:43.854 ++ export SPDK_TEST_FUZZER_TARGET 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_TEST_NVMF_MDNS 00:05:43.854 ++ : 0 00:05:43.854 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:43.854 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:43.854 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:43.854 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:43.854 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:43.854 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:43.854 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:43.854 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:43.854 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:43.854 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:43.854 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:43.854 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:43.854 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:43.854 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:43.854 ++ PYTHONDONTWRITEBYTECODE=1 00:05:43.854 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:43.854 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:43.854 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:43.854 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:43.854 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:43.854 ++ rm -rf /var/tmp/asan_suppression_file 00:05:43.854 ++ cat 00:05:43.854 ++ echo leak:libfuse3.so 00:05:43.854 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:43.854 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:43.854 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:43.854 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:43.854 ++ '[' -z /var/spdk/dependencies ']' 00:05:43.854 ++ export DEPENDENCY_DIR 00:05:43.854 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:43.854 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 
00:05:43.854 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:43.854 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:43.854 ++ export QEMU_BIN= 00:05:43.854 ++ QEMU_BIN= 00:05:43.854 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:43.854 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:43.854 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:43.854 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:43.854 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.854 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.854 ++ '[' 0 -eq 0 ']' 00:05:43.854 ++ export valgrind= 00:05:43.854 ++ valgrind= 00:05:43.854 +++ uname -s 00:05:43.854 ++ '[' Linux = Linux ']' 00:05:43.854 ++ HUGEMEM=4096 00:05:43.854 ++ export CLEAR_HUGE=yes 00:05:43.854 ++ CLEAR_HUGE=yes 00:05:43.854 ++ [[ 0 -eq 1 ]] 00:05:43.854 ++ [[ 0 -eq 1 ]] 00:05:43.854 ++ MAKE=make 00:05:43.854 +++ nproc 00:05:43.854 ++ MAKEFLAGS=-j10 00:05:43.854 ++ export HUGEMEM=4096 00:05:43.854 ++ HUGEMEM=4096 00:05:43.854 ++ NO_HUGE=() 00:05:43.854 ++ TEST_MODE= 00:05:43.854 ++ [[ -z '' ]] 00:05:43.854 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:43.854 ++ exec 00:05:43.854 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:43.854 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:43.854 ++ set_test_storage 2147483648 00:05:43.854 ++ [[ -v testdir ]] 00:05:43.854 ++ local requested_size=2147483648 00:05:43.854 ++ local mount target_dir 00:05:43.854 ++ local -A mounts fss sizes avails uses 00:05:43.854 ++ local source fs size avail mount use 00:05:43.854 ++ local storage_fallback storage_candidates 00:05:43.854 +++ mktemp -udt spdk.XXXXXX 00:05:43.854 ++ storage_fallback=/tmp/spdk.6c5eCq 00:05:43.854 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:43.854 ++ [[ -n '' ]] 00:05:43.854 ++ [[ -n '' ]] 00:05:43.854 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.6c5eCq/tests/unit /tmp/spdk.6c5eCq 00:05:43.854 ++ requested_size=2214592512 00:05:43.854 ++ read -r source fs size use avail _ mount 00:05:43.854 +++ df -T 00:05:43.854 +++ grep -v Filesystem 00:05:43.854 ++ mounts["$mount"]=tmpfs 00:05:43.854 ++ fss["$mount"]=tmpfs 00:05:43.854 ++ avails["$mount"]=1252601856 00:05:43.854 ++ sizes["$mount"]=1253683200 00:05:43.854 ++ uses["$mount"]=1081344 00:05:43.854 ++ read -r source fs size use avail _ mount 00:05:43.854 ++ mounts["$mount"]=/dev/vda1 00:05:43.854 ++ fss["$mount"]=ext4 00:05:43.854 ++ avails["$mount"]=10127863808 00:05:43.854 ++ sizes["$mount"]=20616794112 00:05:43.854 ++ uses["$mount"]=10472153088 00:05:43.854 ++ read -r source fs size use avail _ mount 00:05:43.854 ++ mounts["$mount"]=tmpfs 00:05:43.854 ++ fss["$mount"]=tmpfs 00:05:43.854 ++ avails["$mount"]=6268403712 00:05:43.854 ++ sizes["$mount"]=6268403712 00:05:43.854 ++ uses["$mount"]=0 00:05:43.854 ++ read -r source fs size use avail _ mount 00:05:43.854 ++ mounts["$mount"]=tmpfs 00:05:43.854 ++ fss["$mount"]=tmpfs 00:05:43.854 ++ avails["$mount"]=5242880 00:05:43.854 ++ sizes["$mount"]=5242880 00:05:43.854 ++ uses["$mount"]=0 00:05:43.854 ++ read -r source fs size use avail _ mount 00:05:43.854 ++ mounts["$mount"]=/dev/vda15 00:05:43.854 ++ fss["$mount"]=vfat 00:05:43.855 ++ avails["$mount"]=103061504 
00:05:43.855 ++ sizes["$mount"]=109395968 00:05:43.855 ++ uses["$mount"]=6334464 00:05:43.855 ++ read -r source fs size use avail _ mount 00:05:43.855 ++ mounts["$mount"]=tmpfs 00:05:43.855 ++ fss["$mount"]=tmpfs 00:05:43.855 ++ avails["$mount"]=1253675008 00:05:43.855 ++ sizes["$mount"]=1253679104 00:05:43.855 ++ uses["$mount"]=4096 00:05:43.855 ++ read -r source fs size use avail _ mount 00:05:43.855 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_3/ubuntu2204-libvirt/output 00:05:43.855 ++ fss["$mount"]=fuse.sshfs 00:05:43.855 ++ avails["$mount"]=95077183488 00:05:43.855 ++ sizes["$mount"]=105088212992 00:05:43.855 ++ uses["$mount"]=4625596416 00:05:43.855 ++ read -r source fs size use avail _ mount 00:05:43.855 ++ printf '* Looking for test storage...\n' 00:05:43.855 * Looking for test storage... 00:05:43.855 ++ local target_space new_size 00:05:43.855 ++ for target_dir in "${storage_candidates[@]}" 00:05:43.855 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:43.855 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:43.855 ++ mount=/ 00:05:43.855 ++ target_space=10127863808 00:05:43.855 ++ (( target_space == 0 || target_space < requested_size )) 00:05:43.855 ++ (( target_space >= requested_size )) 00:05:43.855 ++ [[ ext4 == tmpfs ]] 00:05:43.855 ++ [[ ext4 == ramfs ]] 00:05:43.855 ++ [[ / == / ]] 00:05:43.855 ++ new_size=12686745600 00:05:43.855 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:43.855 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:43.855 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:43.855 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:43.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:43.855 ++ return 0 00:05:43.855 ++ set -o errtrace 00:05:43.855 ++ shopt -s extdebug 00:05:43.855 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:43.855 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@1687 -- # true 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@29 -- # exec 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:43.855 13:48:32 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:05:43.855 --rc lcov_branch_coverage=1 00:05:43.855 --rc lcov_function_coverage=1 00:05:43.855 --rc genhtml_branch_coverage=1 00:05:43.855 --rc genhtml_function_coverage=1 00:05:43.855 --rc genhtml_legend=1 00:05:43.855 --rc geninfo_all_blocks=1 00:05:43.855 ' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:05:43.855 --rc lcov_branch_coverage=1 00:05:43.855 --rc lcov_function_coverage=1 00:05:43.855 --rc genhtml_branch_coverage=1 00:05:43.855 --rc genhtml_function_coverage=1 00:05:43.855 --rc genhtml_legend=1 00:05:43.855 --rc geninfo_all_blocks=1 00:05:43.855 ' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:05:43.855 --rc lcov_branch_coverage=1 00:05:43.855 --rc lcov_function_coverage=1 00:05:43.855 --rc genhtml_branch_coverage=1 00:05:43.855 --rc genhtml_function_coverage=1 00:05:43.855 --rc genhtml_legend=1 00:05:43.855 --rc geninfo_all_blocks=1 00:05:43.855 --no-external' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:05:43.855 --rc lcov_branch_coverage=1 00:05:43.855 --rc lcov_function_coverage=1 00:05:43.855 --rc genhtml_branch_coverage=1 00:05:43.855 --rc genhtml_function_coverage=1 00:05:43.855 --rc genhtml_legend=1 00:05:43.855 --rc geninfo_all_blocks=1 00:05:43.855 --no-external' 00:05:43.855 13:48:32 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:50.408 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:50.408 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:37.081 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:37.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:37.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:37.082 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:37.082 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:37.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:37.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:37.082 13:49:23 unittest -- unit/unittest.sh@208 -- # uname -m 00:06:37.082 13:49:23 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:06:37.082 13:49:23 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:37.082 13:49:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.082 13:49:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.082 13:49:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:37.082 ************************************ 00:06:37.082 START TEST unittest_pci_event 00:06:37.082 ************************************ 00:06:37.082 13:49:23 unittest.unittest_pci_event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:37.082 00:06:37.082 00:06:37.082 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.082 http://cunit.sourceforge.net/ 00:06:37.082 00:06:37.082 00:06:37.082 Suite: pci_event 00:06:37.082 Test: test_pci_parse_event ...[2024-07-25 13:49:23.570573] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:37.082 [2024-07-25 13:49:23.572781] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:37.082 passed 00:06:37.082 00:06:37.082 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.082 suites 1 1 n/a 0 0 00:06:37.082 tests 1 1 1 0 0 00:06:37.082 asserts 15 15 15 0 n/a 00:06:37.082 00:06:37.082 Elapsed time = 0.001 seconds 00:06:37.082 00:06:37.082 real 0m0.047s 00:06:37.082 user 0m0.032s 00:06:37.082 sys 0m0.008s 00:06:37.082 13:49:23 unittest.unittest_pci_event -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:37.082 13:49:23 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:06:37.082 ************************************ 00:06:37.082 END TEST unittest_pci_event 00:06:37.082 ************************************ 00:06:37.083 13:49:23 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:37.083 13:49:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.083 13:49:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.083 13:49:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:37.083 ************************************ 00:06:37.083 START TEST unittest_include 00:06:37.083 ************************************ 00:06:37.083 13:49:23 unittest.unittest_include -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:37.083 00:06:37.083 00:06:37.083 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.083 http://cunit.sourceforge.net/ 00:06:37.083 00:06:37.083 00:06:37.083 Suite: histogram 00:06:37.083 Test: histogram_test ...passed 00:06:37.083 Test: histogram_merge ...passed 00:06:37.083 00:06:37.083 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.083 suites 1 1 n/a 0 0 00:06:37.083 tests 2 2 2 0 0 00:06:37.083 asserts 50 50 50 0 n/a 00:06:37.083 00:06:37.083 Elapsed time = 0.006 seconds 00:06:37.083 00:06:37.083 real 0m0.036s 00:06:37.083 user 0m0.019s 00:06:37.083 sys 0m0.017s 00:06:37.083 13:49:23 unittest.unittest_include -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.083 13:49:23 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:06:37.083 ************************************ 00:06:37.083 END TEST unittest_include 00:06:37.083 ************************************ 00:06:37.083 13:49:23 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:06:37.083 13:49:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.083 13:49:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.083 13:49:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:37.083 ************************************ 00:06:37.083 START TEST unittest_bdev 00:06:37.083 ************************************ 00:06:37.083 13:49:23 unittest.unittest_bdev -- common/autotest_common.sh@1125 -- # unittest_bdev 00:06:37.083 13:49:23 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:37.083 00:06:37.083 00:06:37.083 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.083 http://cunit.sourceforge.net/ 00:06:37.083 00:06:37.083 00:06:37.083 Suite: bdev 00:06:37.083 Test: bytes_to_blocks_test ...passed 00:06:37.083 Test: num_blocks_test ...passed 00:06:37.083 Test: io_valid_test ...passed 00:06:37.083 Test: open_write_test ...[2024-07-25 13:49:23.838920] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:37.083 [2024-07-25 13:49:23.839279] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:37.083 [2024-07-25 13:49:23.839431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:37.083 passed 00:06:37.083 Test: 
claim_test ...passed 00:06:37.083 Test: alias_add_del_test ...[2024-07-25 13:49:23.967333] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:37.083 [2024-07-25 13:49:23.967479] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:37.083 [2024-07-25 13:49:23.967540] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:37.083 passed 00:06:37.083 Test: get_device_stat_test ...passed 00:06:37.083 Test: bdev_io_types_test ...passed 00:06:37.083 Test: bdev_io_wait_test ...passed 00:06:37.083 Test: bdev_io_spans_split_test ...passed 00:06:37.083 Test: bdev_io_boundary_split_test ...passed 00:06:37.083 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-25 13:49:24.135401] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:37.083 passed 00:06:37.083 Test: bdev_io_mix_split_test ...passed 00:06:37.083 Test: bdev_io_split_with_io_wait ...passed 00:06:37.083 Test: bdev_io_write_unit_split_test ...[2024-07-25 13:49:24.248206] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:37.083 [2024-07-25 13:49:24.248349] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:37.083 [2024-07-25 13:49:24.248382] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:37.083 [2024-07-25 13:49:24.248431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:37.083 passed 00:06:37.083 Test: bdev_io_alignment_with_boundary ...passed 00:06:37.083 Test: bdev_io_alignment ...passed 00:06:37.083 Test: bdev_histograms ...passed 00:06:37.083 Test: bdev_write_zeroes ...passed 00:06:37.083 Test: bdev_compare_and_write ...passed 00:06:37.083 Test: bdev_compare ...passed 00:06:37.083 Test: bdev_compare_emulated ...passed 00:06:37.083 Test: bdev_zcopy_write ...passed 00:06:37.083 Test: bdev_zcopy_read ...passed 00:06:37.083 Test: bdev_open_while_hotremove ...passed 00:06:37.083 Test: bdev_close_while_hotremove ...passed 00:06:37.083 Test: bdev_open_ext_test ...[2024-07-25 13:49:24.712559] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:37.083 passed 00:06:37.083 Test: bdev_open_ext_unregister ...[2024-07-25 13:49:24.712842] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:37.083 passed 00:06:37.083 Test: bdev_set_io_timeout ...passed 00:06:37.083 Test: bdev_set_qd_sampling ...passed 00:06:37.083 Test: lba_range_overlap ...passed 00:06:37.083 Test: lock_lba_range_check_ranges ...passed 00:06:37.083 Test: lock_lba_range_with_io_outstanding ...passed 00:06:37.083 Test: lock_lba_range_overlapped ...passed 00:06:37.083 Test: bdev_quiesce ...[2024-07-25 13:49:24.923700] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10186:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:37.083 passed 00:06:37.083 Test: bdev_io_abort ...passed 00:06:37.083 Test: bdev_unmap ...passed 00:06:37.083 Test: bdev_write_zeroes_split_test ...passed 00:06:37.083 Test: bdev_set_options_test ...passed 00:06:37.083 Test: bdev_get_memory_domains ...passed 00:06:37.083 Test: bdev_io_ext ...[2024-07-25 13:49:25.068803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:37.083 passed 00:06:37.083 Test: bdev_io_ext_no_opts ...passed 00:06:37.083 Test: bdev_io_ext_invalid_opts ...passed 00:06:37.083 Test: bdev_io_ext_split ...passed 00:06:37.083 Test: bdev_io_ext_bounce_buffer ...passed 00:06:37.083 Test: bdev_register_uuid_alias ...[2024-07-25 13:49:25.291397] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 43f12518-65f7-42e5-a799-abf8387a0dfe already exists 00:06:37.083 [2024-07-25 13:49:25.291491] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:43f12518-65f7-42e5-a799-abf8387a0dfe alias for bdev bdev0 00:06:37.083 passed 00:06:37.083 Test: bdev_unregister_by_name ...[2024-07-25 13:49:25.312470] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8007:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:37.083 passed 00:06:37.083 Test: for_each_bdev_test ...[2024-07-25 13:49:25.312552] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8015:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:37.083 passed 00:06:37.083 Test: bdev_seek_test ...passed 00:06:37.083 Test: bdev_copy ...passed 00:06:37.083 Test: bdev_copy_split_test ...passed 00:06:37.083 Test: examine_locks ...passed 00:06:37.083 Test: claim_v2_rwo ...[2024-07-25 13:49:25.427202] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427310] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427338] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427396] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427421] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427469] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8736:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:37.083 passed 00:06:37.083 Test: claim_v2_rom ...[2024-07-25 13:49:25.427626] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427679] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427701] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427725] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.427758] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8779:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:37.083 [2024-07-25 13:49:25.427803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:37.083 passed 00:06:37.083 Test: claim_v2_rwm ...[2024-07-25 13:49:25.427919] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:37.083 [2024-07-25 13:49:25.427974] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:37.083 [2024-07-25 13:49:25.428004] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.428030] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.428048] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.428074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8829:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.428133] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:37.084 passed 00:06:37.084 Test: claim_v2_existing_writer ...passed 00:06:37.084 Test: claim_v2_existing_v1 ...[2024-07-25 13:49:25.428302] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:37.084 [2024-07-25 13:49:25.428337] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:37.084 [2024-07-25 13:49:25.428453] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.428495] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.428515] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:37.084 passed 00:06:37.084 Test: claim_v1_existing_v2 ...[2024-07-25 13:49:25.428640] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.428692] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:37.084 passed 
00:06:37.084 Test: examine_claimed ...[2024-07-25 13:49:25.428728] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:37.084 [2024-07-25 13:49:25.429010] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:37.084 passed 00:06:37.084 00:06:37.084 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.084 suites 1 1 n/a 0 0 00:06:37.084 tests 59 59 59 0 0 00:06:37.084 asserts 4599 4599 4599 0 n/a 00:06:37.084 00:06:37.084 Elapsed time = 1.671 seconds 00:06:37.084 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:37.084 00:06:37.084 00:06:37.084 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.084 http://cunit.sourceforge.net/ 00:06:37.084 00:06:37.084 00:06:37.084 Suite: nvme 00:06:37.084 Test: test_create_ctrlr ...passed 00:06:37.084 Test: test_reset_ctrlr ...[2024-07-25 13:49:25.487202] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 passed 00:06:37.084 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:37.084 Test: test_failover_ctrlr ...passed 00:06:37.084 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-25 13:49:25.490227] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.490551] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.490822] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 passed 00:06:37.084 Test: test_pending_reset ...[2024-07-25 13:49:25.492409] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.492752] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 passed 00:06:37.084 Test: test_attach_ctrlr ...[2024-07-25 13:49:25.494065] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:37.084 passed 00:06:37.084 Test: test_aer_cb ...passed 00:06:37.084 Test: test_submit_nvme_cmd ...passed 00:06:37.084 Test: test_add_remove_trid ...passed 00:06:37.084 Test: test_abort ...[2024-07-25 13:49:25.497852] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7480:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:37.084 passed 00:06:37.084 Test: test_get_io_qpair ...passed 00:06:37.084 Test: test_bdev_unregister ...passed 00:06:37.084 Test: test_compare_ns ...passed 00:06:37.084 Test: test_init_ana_log_page ...passed 00:06:37.084 Test: test_get_memory_domains ...passed 00:06:37.084 Test: test_reconnect_qpair ...[2024-07-25 13:49:25.500813] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:37.084 passed 00:06:37.084 Test: test_create_bdev_ctrlr ...[2024-07-25 13:49:25.501465] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5407:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:37.084 passed 00:06:37.084 Test: test_add_multi_ns_to_bdev ...[2024-07-25 13:49:25.502932] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4574:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:37.084 passed 00:06:37.084 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:37.084 Test: test_admin_path ...passed 00:06:37.084 Test: test_reset_bdev_ctrlr ...passed 00:06:37.084 Test: test_find_io_path ...passed 00:06:37.084 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:37.084 Test: test_retry_io_for_io_path_error ...passed 00:06:37.084 Test: test_retry_io_count ...passed 00:06:37.084 Test: test_concurrent_read_ana_log_page ...passed 00:06:37.084 Test: test_retry_io_for_ana_error ...passed 00:06:37.084 Test: test_check_io_error_resiliency_params ...[2024-07-25 13:49:25.510791] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:37.084 [2024-07-25 13:49:25.510903] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6108:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:37.084 [2024-07-25 13:49:25.510946] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6117:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:37.084 [2024-07-25 13:49:25.510988] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6120:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:37.084 [2024-07-25 13:49:25.511038] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:37.084 [2024-07-25 13:49:25.511109] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:37.084 [2024-07-25 13:49:25.511133] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:37.084 [2024-07-25 13:49:25.511187] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6127:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:37.084 [2024-07-25 13:49:25.511238] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6124:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:37.084 passed 00:06:37.084 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:37.084 Test: test_reconnect_ctrlr ...[2024-07-25 13:49:25.512228] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.512408] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:37.084 [2024-07-25 13:49:25.512729] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.512901] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.513084] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 passed 00:06:37.084 Test: test_retry_failover_ctrlr ...[2024-07-25 13:49:25.513537] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 passed 00:06:37.084 Test: test_fail_path ...[2024-07-25 13:49:25.514280] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.514491] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.514671] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.514800] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 [2024-07-25 13:49:25.514975] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 passed 00:06:37.084 Test: test_nvme_ns_cmp ...passed 00:06:37.084 Test: test_ana_transition ...passed 00:06:37.084 Test: test_set_preferred_path ...passed 00:06:37.084 Test: test_find_next_io_path ...passed 00:06:37.084 Test: test_find_io_path_min_qd ...passed 00:06:37.084 Test: test_disable_auto_failback ...[2024-07-25 13:49:25.516972] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.084 passed 00:06:37.084 Test: test_set_multipath_policy ...passed 00:06:37.084 Test: test_uuid_generation ...passed 00:06:37.084 Test: test_retry_io_to_same_path ...passed 00:06:37.084 Test: test_race_between_reset_and_disconnected ...passed 00:06:37.084 Test: test_ctrlr_op_rpc ...passed 00:06:37.084 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:37.084 Test: test_disable_enable_ctrlr ...[2024-07-25 13:49:25.520970] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:37.085 [2024-07-25 13:49:25.521178] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:37.085 passed 00:06:37.085 Test: test_delete_ctrlr_done ...passed 00:06:37.085 Test: test_ns_remove_during_reset ...passed 00:06:37.085 Test: test_io_path_is_current ...passed 00:06:37.085 00:06:37.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.085 suites 1 1 n/a 0 0 00:06:37.085 tests 49 49 49 0 0 00:06:37.085 asserts 3578 3578 3578 0 n/a 00:06:37.085 00:06:37.085 Elapsed time = 0.037 seconds 00:06:37.085 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:37.085 00:06:37.085 00:06:37.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.085 http://cunit.sourceforge.net/ 00:06:37.085 00:06:37.085 Test Options 00:06:37.085 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:37.085 00:06:37.085 Suite: raid 00:06:37.085 Test: test_create_raid ...passed 00:06:37.085 Test: test_create_raid_superblock ...passed 00:06:37.085 Test: test_delete_raid ...passed 00:06:37.085 Test: test_create_raid_invalid_args ...[2024-07-25 13:49:25.575855] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1508:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:37.085 [2024-07-25 13:49:25.576445] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1502:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:37.085 [2024-07-25 13:49:25.577146] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1492:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:37.085 [2024-07-25 13:49:25.577461] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3381:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:37.085 [2024-07-25 13:49:25.577560] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3565:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:37.085 [2024-07-25 13:49:25.578704] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3381:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:37.085 [2024-07-25 13:49:25.578784] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3565:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:37.085 passed 00:06:37.085 Test: test_delete_raid_invalid_args ...passed 00:06:37.085 Test: test_io_channel ...passed 00:06:37.085 Test: test_reset_io ...passed 00:06:37.085 Test: test_multi_raid ...passed 00:06:37.085 Test: test_io_type_supported ...passed 00:06:37.085 Test: test_raid_json_dump_info ...passed 00:06:37.085 Test: test_context_size ...passed 00:06:37.085 Test: test_raid_level_conversions ...passed 00:06:37.085 Test: test_raid_io_split ...passed 00:06:37.085 Test: test_raid_process ...passed 00:06:37.085 Test: test_raid_process_with_qos ...passed 00:06:37.085 00:06:37.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.085 suites 1 1 n/a 0 0 00:06:37.085 tests 15 15 15 0 0 00:06:37.085 asserts 6602 6602 6602 0 n/a 00:06:37.085 00:06:37.085 Elapsed time = 0.032 seconds 00:06:37.085 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:37.085 00:06:37.085 00:06:37.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.085 http://cunit.sourceforge.net/ 00:06:37.085 00:06:37.085 00:06:37.085 Suite: raid_sb 00:06:37.085 Test: test_raid_bdev_write_superblock ...passed 
00:06:37.085 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:37.085 Test: test_raid_bdev_parse_superblock ...[2024-07-25 13:49:25.641897] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:37.085 passed 00:06:37.085 Suite: raid_sb_md 00:06:37.085 Test: test_raid_bdev_write_superblock ...passed 00:06:37.085 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:37.085 Test: test_raid_bdev_parse_superblock ...passed[2024-07-25 13:49:25.642462] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:37.085 00:06:37.085 Suite: raid_sb_md_interleaved 00:06:37.085 Test: test_raid_bdev_write_superblock ...passed 00:06:37.085 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:37.085 Test: test_raid_bdev_parse_superblock ...[2024-07-25 13:49:25.642799] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:37.085 passed 00:06:37.085 00:06:37.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.085 suites 3 3 n/a 0 0 00:06:37.085 tests 9 9 9 0 0 00:06:37.085 asserts 139 139 139 0 n/a 00:06:37.085 00:06:37.085 Elapsed time = 0.002 seconds 00:06:37.085 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:37.085 00:06:37.085 00:06:37.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.085 http://cunit.sourceforge.net/ 00:06:37.085 00:06:37.085 00:06:37.085 Suite: concat 00:06:37.085 Test: test_concat_start ...passed 00:06:37.085 Test: test_concat_rw ...passed 00:06:37.085 Test: test_concat_null_payload ...passed 00:06:37.085 00:06:37.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.085 suites 1 1 n/a 0 0 00:06:37.085 tests 3 3 3 0 0 00:06:37.085 asserts 8460 8460 8460 0 n/a 00:06:37.085 00:06:37.085 Elapsed time = 0.009 seconds 00:06:37.085 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:06:37.085 00:06:37.085 00:06:37.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.085 http://cunit.sourceforge.net/ 00:06:37.085 00:06:37.085 00:06:37.085 Suite: raid0 00:06:37.085 Test: test_write_io ...passed 00:06:37.085 Test: test_read_io ...passed 00:06:37.085 Test: test_unmap_io ...passed 00:06:37.085 Test: test_io_failure ...passed 00:06:37.085 Suite: raid0_dif 00:06:37.085 Test: test_write_io ...passed 00:06:37.085 Test: test_read_io ...passed 00:06:37.085 Test: test_unmap_io ...passed 00:06:37.085 Test: test_io_failure ...passed 00:06:37.085 00:06:37.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.085 suites 2 2 n/a 0 0 00:06:37.085 tests 8 8 8 0 0 00:06:37.085 asserts 368291 368291 368291 0 n/a 00:06:37.085 00:06:37.085 Elapsed time = 0.143 seconds 00:06:37.085 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:37.085 00:06:37.085 00:06:37.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.085 http://cunit.sourceforge.net/ 00:06:37.085 00:06:37.085 00:06:37.085 Suite: raid1 00:06:37.085 Test: test_raid1_start ...passed 00:06:37.085 Test: test_raid1_read_balancing ...passed 00:06:37.085 
Test: test_raid1_write_error ...passed 00:06:37.085 Test: test_raid1_read_error ...passed 00:06:37.085 00:06:37.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.085 suites 1 1 n/a 0 0 00:06:37.085 tests 4 4 4 0 0 00:06:37.085 asserts 4374 4374 4374 0 n/a 00:06:37.085 00:06:37.085 Elapsed time = 0.005 seconds 00:06:37.085 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:37.085 00:06:37.085 00:06:37.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.085 http://cunit.sourceforge.net/ 00:06:37.085 00:06:37.085 00:06:37.085 Suite: zone 00:06:37.085 Test: test_zone_get_operation ...passed 00:06:37.085 Test: test_bdev_zone_get_info ...passed 00:06:37.085 Test: test_bdev_zone_management ...passed 00:06:37.085 Test: test_bdev_zone_append ...passed 00:06:37.085 Test: test_bdev_zone_append_with_md ...passed 00:06:37.085 Test: test_bdev_zone_appendv ...passed 00:06:37.085 Test: test_bdev_zone_appendv_with_md ...passed 00:06:37.085 Test: test_bdev_io_get_append_location ...passed 00:06:37.085 00:06:37.085 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.085 suites 1 1 n/a 0 0 00:06:37.085 tests 8 8 8 0 0 00:06:37.085 asserts 94 94 94 0 n/a 00:06:37.085 00:06:37.085 Elapsed time = 0.000 seconds 00:06:37.085 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:37.085 00:06:37.085 00:06:37.085 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.085 http://cunit.sourceforge.net/ 00:06:37.085 00:06:37.085 00:06:37.085 Suite: gpt_parse 00:06:37.085 Test: test_parse_mbr_and_primary ...[2024-07-25 13:49:25.971558] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:37.085 [2024-07-25 13:49:25.971925] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:37.085 [2024-07-25 13:49:25.972018] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:37.085 [2024-07-25 13:49:25.972137] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:37.085 [2024-07-25 13:49:25.972205] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:37.086 [2024-07-25 13:49:25.972317] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:37.086 passed 00:06:37.086 Test: test_parse_secondary ...[2024-07-25 13:49:25.973126] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:37.086 [2024-07-25 13:49:25.973204] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:37.086 [2024-07-25 13:49:25.973247] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:37.086 [2024-07-25 13:49:25.973289] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:37.086 passed 00:06:37.086 Test: test_check_mbr ...[2024-07-25 13:49:25.974107] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: 
Gpt and the related buffer should not be NULL 00:06:37.086 [2024-07-25 13:49:25.974177] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:37.086 passed 00:06:37.086 Test: test_read_header ...[2024-07-25 13:49:25.974245] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:37.086 [2024-07-25 13:49:25.974363] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:37.086 [2024-07-25 13:49:25.974465] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:37.086 [2024-07-25 13:49:25.974520] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:37.086 passed 00:06:37.086 Test: test_read_partitions ...[2024-07-25 13:49:25.974583] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:37.086 [2024-07-25 13:49:25.974637] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:37.086 [2024-07-25 13:49:25.974715] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:37.086 [2024-07-25 13:49:25.974793] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:37.086 [2024-07-25 13:49:25.974855] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:37.086 [2024-07-25 13:49:25.974896] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:37.086 [2024-07-25 13:49:25.975326] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:37.086 passed 00:06:37.086 00:06:37.086 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.086 suites 1 1 n/a 0 0 00:06:37.086 tests 5 5 5 0 0 00:06:37.086 asserts 33 33 33 0 n/a 00:06:37.086 00:06:37.086 Elapsed time = 0.005 seconds 00:06:37.086 13:49:25 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:37.086 00:06:37.086 00:06:37.086 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.086 http://cunit.sourceforge.net/ 00:06:37.086 00:06:37.086 00:06:37.086 Suite: bdev_part 00:06:37.086 Test: part_test ...[2024-07-25 13:49:26.016602] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 229729dc-2ab2-5222-9083-42d190c814f2 already exists 00:06:37.086 [2024-07-25 13:49:26.016889] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:229729dc-2ab2-5222-9083-42d190c814f2 alias for bdev test1 00:06:37.086 passed 00:06:37.086 Test: part_free_test ...passed 00:06:37.086 Test: part_get_io_channel_test ...passed 00:06:37.086 Test: part_construct_ext ...passed 00:06:37.086 00:06:37.086 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.086 suites 1 1 n/a 0 0 00:06:37.086 tests 4 4 4 0 0 00:06:37.086 asserts 48 48 48 0 n/a 00:06:37.086 00:06:37.086 Elapsed time = 0.055 seconds 00:06:37.086 13:49:26 unittest.unittest_bdev -- unit/unittest.sh@30 
-- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:37.086 00:06:37.086 00:06:37.086 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.086 http://cunit.sourceforge.net/ 00:06:37.086 00:06:37.086 00:06:37.086 Suite: scsi_nvme_suite 00:06:37.086 Test: scsi_nvme_translate_test ...passed 00:06:37.086 00:06:37.086 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.086 suites 1 1 n/a 0 0 00:06:37.086 tests 1 1 1 0 0 00:06:37.086 asserts 104 104 104 0 n/a 00:06:37.086 00:06:37.086 Elapsed time = 0.000 seconds 00:06:37.346 13:49:26 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:37.346 00:06:37.346 00:06:37.346 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.346 http://cunit.sourceforge.net/ 00:06:37.346 00:06:37.346 00:06:37.346 Suite: lvol 00:06:37.346 Test: ut_lvs_init ...[2024-07-25 13:49:26.144258] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:37.346 [2024-07-25 13:49:26.144898] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:37.346 passed 00:06:37.346 Test: ut_lvol_init ...passed 00:06:37.346 Test: ut_lvol_snapshot ...passed 00:06:37.346 Test: ut_lvol_clone ...passed 00:06:37.346 Test: ut_lvs_destroy ...passed 00:06:37.346 Test: ut_lvs_unload ...passed 00:06:37.346 Test: ut_lvol_resize ...[2024-07-25 13:49:26.147188] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:37.346 passed 00:06:37.346 Test: ut_lvol_set_read_only ...passed 00:06:37.346 Test: ut_lvol_hotremove ...passed 00:06:37.346 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:37.346 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:37.346 Test: ut_lvol_read_write ...passed 00:06:37.346 Test: ut_vbdev_lvol_submit_request ...passed 00:06:37.346 Test: ut_lvol_examine_config ...passed 00:06:37.346 Test: ut_lvol_examine_disk ...[2024-07-25 13:49:26.148018] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:37.346 passed 00:06:37.346 Test: ut_lvol_rename ...[2024-07-25 13:49:26.149225] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:37.346 [2024-07-25 13:49:26.149367] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:37.346 passed 00:06:37.346 Test: ut_bdev_finish ...passed 00:06:37.346 Test: ut_lvs_rename ...passed 00:06:37.346 Test: ut_lvol_seek ...passed 00:06:37.346 Test: ut_esnap_dev_create ...[2024-07-25 13:49:26.150206] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:37.346 [2024-07-25 13:49:26.150306] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:37.346 passed 00:06:37.346 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-25 13:49:26.150348] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:37.346 [2024-07-25 13:49:26.150480] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:37.346 [2024-07-25 13:49:26.150530] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:37.346 passed 00:06:37.346 Test: ut_lvol_shallow_copy ...[2024-07-25 13:49:26.150950] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:37.346 [2024-07-25 13:49:26.151004] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:06:37.346 passed 00:06:37.346 Test: ut_lvol_set_external_parent ...passed 00:06:37.346 00:06:37.346 [2024-07-25 13:49:26.151178] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:37.346 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.346 suites 1 1 n/a 0 0 00:06:37.346 tests 23 23 23 0 0 00:06:37.346 asserts 770 770 770 0 n/a 00:06:37.346 00:06:37.346 Elapsed time = 0.007 seconds 00:06:37.346 13:49:26 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:37.346 00:06:37.346 00:06:37.346 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.346 http://cunit.sourceforge.net/ 00:06:37.346 00:06:37.346 00:06:37.346 Suite: zone_block 00:06:37.346 Test: test_zone_block_create ...passed 00:06:37.346 Test: test_zone_block_create_invalid ...[2024-07-25 13:49:26.195517] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:37.346 [2024-07-25 13:49:26.195931] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-25 13:49:26.196257] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:37.346 [2024-07-25 13:49:26.196441] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-25 13:49:26.196756] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:37.346 [2024-07-25 13:49:26.196823] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-25 13:49:26.196997] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:37.346 [2024-07-25 13:49:26.197328] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:37.346 Test: test_get_zone_info ...[2024-07-25 13:49:26.198140] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:37.346 [2024-07-25 13:49:26.198280] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.198333] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 passed 00:06:37.346 Test: test_supported_io_types ...passed 00:06:37.346 Test: test_reset_zone ...[2024-07-25 13:49:26.199682] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.199749] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 passed 00:06:37.346 Test: test_open_zone ...[2024-07-25 13:49:26.200579] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.201385] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.201465] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 passed 00:06:37.346 Test: test_zone_write ...[2024-07-25 13:49:26.202429] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:37.346 [2024-07-25 13:49:26.202562] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.202833] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:37.346 [2024-07-25 13:49:26.202901] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.208146] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:37.346 [2024-07-25 13:49:26.208211] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.208294] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:37.346 [2024-07-25 13:49:26.208319] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.346 [2024-07-25 13:49:26.213348] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:37.347 [2024-07-25 13:49:26.213429] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:37.347 passed 00:06:37.347 Test: test_zone_read ...[2024-07-25 13:49:26.214386] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:37.347 [2024-07-25 13:49:26.214470] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.214570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:37.347 [2024-07-25 13:49:26.214600] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.215303] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:37.347 [2024-07-25 13:49:26.215370] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 passed 00:06:37.347 Test: test_close_zone ...[2024-07-25 13:49:26.216262] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.216375] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.216754] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.216816] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 passed 00:06:37.347 Test: test_finish_zone ...[2024-07-25 13:49:26.217847] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.217941] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 passed 00:06:37.347 Test: test_append_zone ...[2024-07-25 13:49:26.218647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:37.347 [2024-07-25 13:49:26.218699] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.218755] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:37.347 [2024-07-25 13:49:26.218874] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:37.347 [2024-07-25 13:49:26.228715] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:37.347 [2024-07-25 13:49:26.228802] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:37.347 passed 00:06:37.347 00:06:37.347 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.347 suites 1 1 n/a 0 0 00:06:37.347 tests 11 11 11 0 0 00:06:37.347 asserts 3437 3437 3437 0 n/a 00:06:37.347 00:06:37.347 Elapsed time = 0.035 seconds 00:06:37.347 13:49:26 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:37.347 00:06:37.347 00:06:37.347 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.347 http://cunit.sourceforge.net/ 00:06:37.347 00:06:37.347 00:06:37.347 Suite: bdev 00:06:37.347 Test: basic ...[2024-07-25 13:49:26.327467] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55a924017b41): Operation not permitted (rc=-1) 00:06:37.347 [2024-07-25 13:49:26.327814] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55a924017b00): Operation not permitted (rc=-1) 00:06:37.347 [2024-07-25 13:49:26.327899] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55a924017b41): Operation not permitted (rc=-1) 00:06:37.347 passed 00:06:37.606 Test: unregister_and_close ...passed 00:06:37.606 Test: unregister_and_close_different_threads ...passed 00:06:37.606 Test: basic_qos ...passed 00:06:37.606 Test: put_channel_during_reset ...passed 00:06:37.606 Test: aborted_reset ...passed 00:06:37.606 Test: aborted_reset_no_outstanding_io ...passed 00:06:37.865 Test: io_during_reset ...passed 00:06:37.865 Test: reset_completions ...passed 00:06:37.865 Test: io_during_qos_queue ...passed 00:06:37.865 Test: io_during_qos_reset ...passed 00:06:37.865 Test: enomem ...passed 00:06:37.865 Test: enomem_multi_bdev ...passed 00:06:38.124 Test: enomem_multi_bdev_unregister ...passed 00:06:38.124 Test: enomem_multi_io_target ...passed 00:06:38.124 Test: qos_dynamic_enable ...passed 00:06:38.124 Test: bdev_histograms_mt ...passed 00:06:38.124 Test: bdev_set_io_timeout_mt ...[2024-07-25 13:49:27.086412] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:38.124 passed 00:06:38.124 Test: lock_lba_range_then_submit_io ...[2024-07-25 13:49:27.106827] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x55a924017ac0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:38.124 passed 00:06:38.383 Test: unregister_during_reset ...passed 00:06:38.383 Test: event_notify_and_close ...passed 00:06:38.383 Test: unregister_and_qos_poller ...passed 00:06:38.383 Suite: bdev_wrong_thread 00:06:38.383 Test: spdk_bdev_register_wt ...[2024-07-25 13:49:27.256592] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8535:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80) 00:06:38.383 passed 00:06:38.383 Test: spdk_bdev_examine_wt ...[2024-07-25 13:49:27.256973] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80) 00:06:38.383 passed 00:06:38.383 00:06:38.383 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.383 suites 2 2 n/a 0 0 00:06:38.383 tests 24 24 24 0 0 00:06:38.383 asserts 621 621 621 0 n/a 00:06:38.383 00:06:38.383 Elapsed time = 0.959 seconds 00:06:38.383 00:06:38.383 real 0m3.551s 00:06:38.383 user 0m1.703s 00:06:38.383 sys 0m1.854s 00:06:38.383 13:49:27 unittest.unittest_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.383 13:49:27 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 ************************************ 00:06:38.383 END TEST unittest_bdev 00:06:38.383 ************************************ 00:06:38.383 13:49:27 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:38.383 13:49:27 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:38.383 13:49:27 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:38.383 13:49:27 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:38.383 13:49:27 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:38.383 13:49:27 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.383 13:49:27 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.383 13:49:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:38.383 ************************************ 00:06:38.383 START TEST unittest_bdev_raid5f 00:06:38.383 ************************************ 00:06:38.383 13:49:27 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:38.383 00:06:38.383 00:06:38.383 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.383 http://cunit.sourceforge.net/ 00:06:38.383 00:06:38.383 00:06:38.383 Suite: raid5f 00:06:38.383 Test: test_raid5f_start ...passed 00:06:39.320 Test: test_raid5f_submit_read_request ...passed 00:06:39.320 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:44.590 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:11.123 Test: test_raid5f_chunk_write_error ...passed 00:07:23.335 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:27.520 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:08:14.181 Test: test_raid5f_submit_read_request_degraded ...passed 00:08:14.181 00:08:14.181 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.181 suites 1 1 n/a 0 0 00:08:14.181 tests 8 8 8 0 0 00:08:14.181 asserts 518158 518158 518158 0 n/a 00:08:14.181 00:08:14.181 Elapsed time = 90.558 seconds 00:08:14.181 00:08:14.181 real 1m30.656s 00:08:14.181 user 1m26.056s 00:08:14.181 sys 0m4.577s 00:08:14.181 13:50:58 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.181 13:50:58 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:08:14.181 ************************************ 00:08:14.181 END TEST unittest_bdev_raid5f 00:08:14.181 ************************************ 00:08:14.181 13:50:58 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:08:14.181 13:50:58 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.181 13:50:58 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.181 13:50:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:14.181 ************************************ 00:08:14.181 START TEST unittest_blob_blobfs 00:08:14.181 ************************************ 00:08:14.181 13:50:58 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1125 -- # unittest_blob 00:08:14.181 
13:50:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:08:14.181 13:50:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:08:14.181 00:08:14.181 00:08:14.181 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.181 http://cunit.sourceforge.net/ 00:08:14.181 00:08:14.181 00:08:14.181 Suite: blob_nocopy_noextent 00:08:14.181 Test: blob_init ...[2024-07-25 13:50:58.096830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:14.181 passed 00:08:14.181 Test: blob_thin_provision ...passed 00:08:14.181 Test: blob_read_only ...passed 00:08:14.181 Test: bs_load ...[2024-07-25 13:50:58.203051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:14.181 passed 00:08:14.181 Test: bs_load_custom_cluster_size ...passed 00:08:14.181 Test: bs_load_after_failed_grow ...passed 00:08:14.181 Test: bs_cluster_sz ...[2024-07-25 13:50:58.239702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:14.181 [2024-07-25 13:50:58.240504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:14.181 [2024-07-25 13:50:58.240885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:14.181 passed 00:08:14.181 Test: bs_resize_md ...passed 00:08:14.181 Test: bs_destroy ...passed 00:08:14.181 Test: bs_type ...passed 00:08:14.181 Test: bs_super_block ...passed 00:08:14.181 Test: bs_test_recover_cluster_count ...passed 00:08:14.181 Test: bs_grow_live ...passed 00:08:14.181 Test: bs_grow_live_no_space ...passed 00:08:14.181 Test: bs_test_grow ...passed 00:08:14.181 Test: blob_serialize_test ...passed 00:08:14.181 Test: super_block_crc ...passed 00:08:14.181 Test: blob_thin_prov_write_count_io ...passed 00:08:14.181 Test: blob_thin_prov_unmap_cluster ...passed 00:08:14.181 Test: bs_load_iter_test ...passed 00:08:14.181 Test: blob_relations ...[2024-07-25 13:50:58.484167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.181 [2024-07-25 13:50:58.484603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.485807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.181 [2024-07-25 13:50:58.486043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 passed 00:08:14.181 Test: blob_relations2 ...[2024-07-25 13:50:58.503431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.181 [2024-07-25 13:50:58.503770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.503950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with 
more than one clone 00:08:14.181 [2024-07-25 13:50:58.504119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.505873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.181 [2024-07-25 13:50:58.506075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.506676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.181 [2024-07-25 13:50:58.506848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 passed 00:08:14.181 Test: blob_relations3 ...passed 00:08:14.181 Test: blobstore_clean_power_failure ...passed 00:08:14.181 Test: blob_delete_snapshot_power_failure ...[2024-07-25 13:50:58.700150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:14.181 [2024-07-25 13:50:58.715240] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:14.181 [2024-07-25 13:50:58.715578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:14.181 [2024-07-25 13:50:58.715775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.730789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:14.181 [2024-07-25 13:50:58.731083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:14.181 [2024-07-25 13:50:58.731280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:14.181 [2024-07-25 13:50:58.731476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.746624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:14.181 [2024-07-25 13:50:58.747049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.762359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:14.181 [2024-07-25 13:50:58.762757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 [2024-07-25 13:50:58.777976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:14.181 [2024-07-25 13:50:58.778319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.181 passed 00:08:14.181 Test: blob_create_snapshot_power_failure ...[2024-07-25 13:50:58.822779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:14.181 [2024-07-25 13:50:58.851705] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:14.181 [2024-07-25 13:50:58.866678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:14.181 passed 00:08:14.181 Test: blob_io_unit ...passed 00:08:14.181 Test: blob_io_unit_compatibility ...passed 00:08:14.181 Test: blob_ext_md_pages ...passed 00:08:14.181 Test: blob_esnap_io_4096_4096 ...passed 00:08:14.181 Test: blob_esnap_io_512_512 ...passed 00:08:14.182 Test: blob_esnap_io_4096_512 ...passed 00:08:14.182 Test: blob_esnap_io_512_4096 ...passed 00:08:14.182 Test: blob_esnap_clone_resize ...passed 00:08:14.182 Suite: blob_bs_nocopy_noextent 00:08:14.182 Test: blob_open ...passed 00:08:14.182 Test: blob_create ...[2024-07-25 13:50:59.200297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:14.182 passed 00:08:14.182 Test: blob_create_loop ...passed 00:08:14.182 Test: blob_create_fail ...[2024-07-25 13:50:59.316532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:14.182 passed 00:08:14.182 Test: blob_create_internal ...passed 00:08:14.182 Test: blob_create_zero_extent ...passed 00:08:14.182 Test: blob_snapshot ...passed 00:08:14.182 Test: blob_clone ...passed 00:08:14.182 Test: blob_inflate ...[2024-07-25 13:50:59.541160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:14.182 passed 00:08:14.182 Test: blob_delete ...passed 00:08:14.182 Test: blob_resize_test ...[2024-07-25 13:50:59.621898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:14.182 passed 00:08:14.182 Test: blob_resize_thin_test ...passed 00:08:14.182 Test: channel_ops ...passed 00:08:14.182 Test: blob_super ...passed 00:08:14.182 Test: blob_rw_verify_iov ...passed 00:08:14.182 Test: blob_unmap ...passed 00:08:14.182 Test: blob_iter ...passed 00:08:14.182 Test: blob_parse_md ...passed 00:08:14.182 Test: bs_load_pending_removal ...passed 00:08:14.182 Test: bs_unload ...[2024-07-25 13:50:59.998950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:14.182 passed 00:08:14.182 Test: bs_usable_clusters ...passed 00:08:14.182 Test: blob_crc ...[2024-07-25 13:51:00.081076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:14.182 [2024-07-25 13:51:00.081491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:14.182 passed 00:08:14.182 Test: blob_flags ...passed 00:08:14.182 Test: bs_version ...passed 00:08:14.182 Test: blob_set_xattrs_test ...[2024-07-25 13:51:00.204013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:14.182 [2024-07-25 13:51:00.204568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:14.182 passed 00:08:14.182 Test: blob_thin_prov_alloc ...passed 
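Editor's note: the "Unknown error -28" and "Unknown error -22" strings in the blob_create/blob_create_fail and blob_set_xattrs cases above are just spdk_strerror() applied to a negative errno: -28 is -ENOSPC (asking for 65 clusters on a device that cannot hold them) and -22 is -EINVAL (zero-sized or otherwise invalid options). A minimal sketch of the assertion pattern such a case might follow is shown below; the g_bserrno global and the poll step are stand-ins for whatever completion plumbing the real blob_ut harness uses, so treat those details as assumptions.

#include <errno.h>
#include "CUnit/Basic.h"
#include "spdk/blob.h"

static int g_bserrno;            /* assumed: written by the completion callback below */

static void
blob_op_with_id_complete(void *cb_arg, spdk_blob_id blobid, int bserrno)
{
	(void)cb_arg;
	(void)blobid;
	g_bserrno = bserrno;     /* 0 on success, negative errno on failure */
}

/* Illustrative only: request more clusters than the backing bs_dev can hold
 * and expect the create to fail with -ENOSPC ("Unknown error -28" in the log). */
static void
blob_create_should_hit_enospc(struct spdk_blob_store *bs)
{
	struct spdk_blob_opts opts;

	/* Two-argument form used by recent SPDK releases. */
	spdk_blob_opts_init(&opts, sizeof(opts));
	opts.num_clusters = 65;  /* matches "size in clusters/size: 65 (clusters)" above */

	spdk_bs_create_blob_ext(bs, &opts, blob_op_with_id_complete, NULL);
	/* ...poll the SPDK thread here until the callback fires (harness-specific)... */
	CU_ASSERT(g_bserrno == -ENOSPC);
}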
00:08:14.182 Test: blob_insert_cluster_msg_test ...passed 00:08:14.182 Test: blob_thin_prov_rw ...passed 00:08:14.182 Test: blob_thin_prov_rle ...passed 00:08:14.182 Test: blob_thin_prov_rw_iov ...passed 00:08:14.182 Test: blob_snapshot_rw ...passed 00:08:14.182 Test: blob_snapshot_rw_iov ...passed 00:08:14.182 Test: blob_inflate_rw ...passed 00:08:14.182 Test: blob_snapshot_freeze_io ...passed 00:08:14.182 Test: blob_operation_split_rw ...passed 00:08:14.182 Test: blob_operation_split_rw_iov ...passed 00:08:14.182 Test: blob_simultaneous_operations ...[2024-07-25 13:51:01.322673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:14.182 [2024-07-25 13:51:01.323265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.182 [2024-07-25 13:51:01.324563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:14.182 [2024-07-25 13:51:01.324735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.182 [2024-07-25 13:51:01.336345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:14.182 [2024-07-25 13:51:01.336633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.182 [2024-07-25 13:51:01.336923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:14.182 [2024-07-25 13:51:01.337097] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.182 passed 00:08:14.182 Test: blob_persist_test ...passed 00:08:14.182 Test: blob_decouple_snapshot ...passed 00:08:14.182 Test: blob_seek_io_unit ...passed 00:08:14.182 Test: blob_nested_freezes ...passed 00:08:14.182 Test: blob_clone_resize ...passed 00:08:14.182 Test: blob_shallow_copy ...[2024-07-25 13:51:01.656516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:14.182 [2024-07-25 13:51:01.657181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:14.182 [2024-07-25 13:51:01.657579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:14.182 passed 00:08:14.182 Suite: blob_blob_nocopy_noextent 00:08:14.182 Test: blob_write ...passed 00:08:14.182 Test: blob_read ...passed 00:08:14.182 Test: blob_rw_verify ...passed 00:08:14.182 Test: blob_rw_verify_iov_nomem ...passed 00:08:14.182 Test: blob_rw_iov_read_only ...passed 00:08:14.182 Test: blob_xattr ...passed 00:08:14.182 Test: blob_dirty_shutdown ...passed 00:08:14.182 Test: blob_is_degraded ...passed 00:08:14.182 Suite: blob_esnap_bs_nocopy_noextent 00:08:14.182 Test: blob_esnap_create ...passed 00:08:14.182 Test: blob_esnap_thread_add_remove ...passed 00:08:14.182 Test: blob_esnap_clone_snapshot ...passed 00:08:14.182 Test: blob_esnap_clone_inflate ...passed 00:08:14.182 Test: blob_esnap_clone_decouple ...passed 00:08:14.182 Test: blob_esnap_clone_reload 
...passed 00:08:14.182 Test: blob_esnap_hotplug ...passed 00:08:14.182 Test: blob_set_parent ...[2024-07-25 13:51:02.336151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:14.182 [2024-07-25 13:51:02.336763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:14.182 [2024-07-25 13:51:02.337104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:14.182 [2024-07-25 13:51:02.337292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:14.182 [2024-07-25 13:51:02.338069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:14.182 passed 00:08:14.182 Test: blob_set_external_parent ...[2024-07-25 13:51:02.380236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:14.182 [2024-07-25 13:51:02.380595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:14.182 [2024-07-25 13:51:02.380782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:14.182 [2024-07-25 13:51:02.381433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:14.182 passed 00:08:14.182 Suite: blob_nocopy_extent 00:08:14.182 Test: blob_init ...[2024-07-25 13:51:02.395835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:14.182 passed 00:08:14.182 Test: blob_thin_provision ...passed 00:08:14.182 Test: blob_read_only ...passed 00:08:14.182 Test: bs_load ...[2024-07-25 13:51:02.453517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:14.182 passed 00:08:14.182 Test: bs_load_custom_cluster_size ...passed 00:08:14.182 Test: bs_load_after_failed_grow ...passed 00:08:14.182 Test: bs_cluster_sz ...[2024-07-25 13:51:02.485993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:14.182 [2024-07-25 13:51:02.486566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
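Editor's note: these bs_cluster_sz failures are injected on purpose: spdk_bs_init() is driven with a cluster size of 0, with too little room left for the reserved metadata pages, and (just below) with 4095 bytes, which is smaller than the 4 KiB metadata page. For contrast, a hedged sketch of a normal initialization path follows; dev and the callback wiring are placeholders, and the exact spdk_bs_opts_init() signature should be checked against the SPDK version in use.

#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	(void)bs;
	/* bserrno is 0 on success or a negative errno, e.g. -EINVAL for bad opts. */
	*(int *)cb_arg = bserrno;
}

static void
init_blobstore(struct spdk_bs_dev *dev, int *rc)
{
	struct spdk_bs_opts opts;

	/* Recent SPDK releases take the struct size as a second argument. */
	spdk_bs_opts_init(&opts, sizeof(opts));

	/* Must be non-zero, at least the 4096-byte metadata page size, and large
	 * enough that the pages reserved for metadata still fit on the device. */
	opts.cluster_sz = 1024 * 1024;

	spdk_bs_init(dev, &opts, bs_init_done, rc);
}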
00:08:14.182 [2024-07-25 13:51:02.486754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:14.182 passed 00:08:14.183 Test: bs_resize_md ...passed 00:08:14.183 Test: bs_destroy ...passed 00:08:14.183 Test: bs_type ...passed 00:08:14.183 Test: bs_super_block ...passed 00:08:14.183 Test: bs_test_recover_cluster_count ...passed 00:08:14.183 Test: bs_grow_live ...passed 00:08:14.183 Test: bs_grow_live_no_space ...passed 00:08:14.183 Test: bs_test_grow ...passed 00:08:14.183 Test: blob_serialize_test ...passed 00:08:14.183 Test: super_block_crc ...passed 00:08:14.183 Test: blob_thin_prov_write_count_io ...passed 00:08:14.183 Test: blob_thin_prov_unmap_cluster ...passed 00:08:14.183 Test: bs_load_iter_test ...passed 00:08:14.183 Test: blob_relations ...[2024-07-25 13:51:02.705473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.183 [2024-07-25 13:51:02.705936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:02.707082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.183 [2024-07-25 13:51:02.707249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 passed 00:08:14.183 Test: blob_relations2 ...[2024-07-25 13:51:02.724247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.183 [2024-07-25 13:51:02.724569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:02.724759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.183 [2024-07-25 13:51:02.724963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:02.726649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.183 [2024-07-25 13:51:02.726829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:02.727410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:14.183 [2024-07-25 13:51:02.727581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 passed 00:08:14.183 Test: blob_relations3 ...passed 00:08:14.183 Test: blobstore_clean_power_failure ...passed 00:08:14.183 Test: blob_delete_snapshot_power_failure ...[2024-07-25 13:51:02.922282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:14.183 [2024-07-25 13:51:02.937649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:14.183 [2024-07-25 13:51:02.955534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:14.183 [2024-07-25 13:51:02.955680] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:14.183 [2024-07-25 13:51:02.955745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:02.973280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:14.183 [2024-07-25 13:51:02.973481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:14.183 [2024-07-25 13:51:02.973540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:14.183 [2024-07-25 13:51:02.973612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:02.991139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:14.183 [2024-07-25 13:51:02.991271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:14.183 [2024-07-25 13:51:02.991324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:14.183 [2024-07-25 13:51:02.991392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:03.009996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:14.183 [2024-07-25 13:51:03.010166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:03.027827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:14.183 [2024-07-25 13:51:03.028041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 [2024-07-25 13:51:03.046365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:14.183 [2024-07-25 13:51:03.046546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:14.183 passed 00:08:14.183 Test: blob_create_snapshot_power_failure ...[2024-07-25 13:51:03.095771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:14.183 [2024-07-25 13:51:03.110941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:14.183 [2024-07-25 13:51:03.140762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:14.183 [2024-07-25 13:51:03.156355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:14.183 passed 00:08:14.442 Test: blob_io_unit ...passed 00:08:14.442 Test: blob_io_unit_compatibility ...passed 00:08:14.442 Test: blob_ext_md_pages ...passed 00:08:14.442 Test: blob_esnap_io_4096_4096 ...passed 00:08:14.442 Test: blob_esnap_io_512_512 ...passed 00:08:14.442 Test: blob_esnap_io_4096_512 ...passed 00:08:14.442 Test: 
blob_esnap_io_512_4096 ...passed 00:08:14.442 Test: blob_esnap_clone_resize ...passed 00:08:14.442 Suite: blob_bs_nocopy_extent 00:08:14.442 Test: blob_open ...passed 00:08:14.699 Test: blob_create ...[2024-07-25 13:51:03.496861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:14.699 passed 00:08:14.699 Test: blob_create_loop ...passed 00:08:14.699 Test: blob_create_fail ...[2024-07-25 13:51:03.622551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:14.699 passed 00:08:14.699 Test: blob_create_internal ...passed 00:08:14.699 Test: blob_create_zero_extent ...passed 00:08:14.957 Test: blob_snapshot ...passed 00:08:14.957 Test: blob_clone ...passed 00:08:14.957 Test: blob_inflate ...[2024-07-25 13:51:03.849912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:14.957 passed 00:08:14.957 Test: blob_delete ...passed 00:08:14.957 Test: blob_resize_test ...[2024-07-25 13:51:03.932590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:14.957 passed 00:08:15.215 Test: blob_resize_thin_test ...passed 00:08:15.215 Test: channel_ops ...passed 00:08:15.215 Test: blob_super ...passed 00:08:15.215 Test: blob_rw_verify_iov ...passed 00:08:15.215 Test: blob_unmap ...passed 00:08:15.215 Test: blob_iter ...passed 00:08:15.215 Test: blob_parse_md ...passed 00:08:15.473 Test: bs_load_pending_removal ...passed 00:08:15.473 Test: bs_unload ...[2024-07-25 13:51:04.315193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:15.473 passed 00:08:15.473 Test: bs_usable_clusters ...passed 00:08:15.473 Test: blob_crc ...[2024-07-25 13:51:04.398513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:15.473 [2024-07-25 13:51:04.398655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:15.473 passed 00:08:15.473 Test: blob_flags ...passed 00:08:15.473 Test: bs_version ...passed 00:08:15.732 Test: blob_set_xattrs_test ...[2024-07-25 13:51:04.524967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:15.732 [2024-07-25 13:51:04.525100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:15.732 passed 00:08:15.732 Test: blob_thin_prov_alloc ...passed 00:08:15.732 Test: blob_insert_cluster_msg_test ...passed 00:08:15.990 Test: blob_thin_prov_rw ...passed 00:08:15.991 Test: blob_thin_prov_rle ...passed 00:08:15.991 Test: blob_thin_prov_rw_iov ...passed 00:08:15.991 Test: blob_snapshot_rw ...passed 00:08:15.991 Test: blob_snapshot_rw_iov ...passed 00:08:16.248 Test: blob_inflate_rw ...passed 00:08:16.249 Test: blob_snapshot_freeze_io ...passed 00:08:16.506 Test: blob_operation_split_rw ...passed 00:08:16.765 Test: blob_operation_split_rw_iov ...passed 00:08:16.765 Test: blob_simultaneous_operations ...[2024-07-25 13:51:05.648207] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.765 [2024-07-25 13:51:05.648335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.765 [2024-07-25 13:51:05.649651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.765 [2024-07-25 13:51:05.649713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.765 [2024-07-25 13:51:05.661645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.765 [2024-07-25 13:51:05.661758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.765 [2024-07-25 13:51:05.661926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.765 [2024-07-25 13:51:05.661953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.765 passed 00:08:16.765 Test: blob_persist_test ...passed 00:08:17.023 Test: blob_decouple_snapshot ...passed 00:08:17.023 Test: blob_seek_io_unit ...passed 00:08:17.023 Test: blob_nested_freezes ...passed 00:08:17.023 Test: blob_clone_resize ...passed 00:08:17.023 Test: blob_shallow_copy ...[2024-07-25 13:51:05.996512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:17.023 [2024-07-25 13:51:05.996879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:17.023 [2024-07-25 13:51:05.997366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:17.023 passed 00:08:17.023 Suite: blob_blob_nocopy_extent 00:08:17.023 Test: blob_write ...passed 00:08:17.282 Test: blob_read ...passed 00:08:17.282 Test: blob_rw_verify ...passed 00:08:17.282 Test: blob_rw_verify_iov_nomem ...passed 00:08:17.282 Test: blob_rw_iov_read_only ...passed 00:08:17.282 Test: blob_xattr ...passed 00:08:17.540 Test: blob_dirty_shutdown ...passed 00:08:17.540 Test: blob_is_degraded ...passed 00:08:17.540 Suite: blob_esnap_bs_nocopy_extent 00:08:17.540 Test: blob_esnap_create ...passed 00:08:17.540 Test: blob_esnap_thread_add_remove ...passed 00:08:17.540 Test: blob_esnap_clone_snapshot ...passed 00:08:17.540 Test: blob_esnap_clone_inflate ...passed 00:08:17.540 Test: blob_esnap_clone_decouple ...passed 00:08:17.799 Test: blob_esnap_clone_reload ...passed 00:08:17.799 Test: blob_esnap_hotplug ...passed 00:08:17.799 Test: blob_set_parent ...[2024-07-25 13:51:06.682218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:17.799 [2024-07-25 13:51:06.682345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:17.799 [2024-07-25 13:51:06.682482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:17.799 
[2024-07-25 13:51:06.682527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:17.799 [2024-07-25 13:51:06.683315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:17.799 passed 00:08:17.799 Test: blob_set_external_parent ...[2024-07-25 13:51:06.725979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:17.799 [2024-07-25 13:51:06.726090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:17.799 [2024-07-25 13:51:06.726121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:17.799 [2024-07-25 13:51:06.726761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:17.799 passed 00:08:17.799 Suite: blob_copy_noextent 00:08:17.799 Test: blob_init ...[2024-07-25 13:51:06.741322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:17.799 passed 00:08:17.799 Test: blob_thin_provision ...passed 00:08:17.799 Test: blob_read_only ...passed 00:08:17.799 Test: bs_load ...[2024-07-25 13:51:06.798808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:17.799 passed 00:08:17.799 Test: bs_load_custom_cluster_size ...passed 00:08:17.799 Test: bs_load_after_failed_grow ...passed 00:08:17.799 Test: bs_cluster_sz ...[2024-07-25 13:51:06.828917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:17.799 [2024-07-25 13:51:06.829154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
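Editor's note: the blob_set_external_parent failure a few entries above is a pure divisibility check: the external snapshot device is 61440 bytes, which is 3.75 clusters of 16384 bytes, so the call is rejected. A tiny illustrative pre-check (not the SPDK implementation itself) makes the rule explicit:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative pre-check only: an external snapshot device must span a whole
 * number of blobstore clusters. 61440 % 16384 == 12288, so the case in the
 * log is rejected at exactly this kind of check. */
static bool
esnap_size_is_valid(uint64_t esnap_dev_size_bytes, uint64_t cluster_sz_bytes)
{
	return cluster_sz_bytes != 0 && esnap_dev_size_bytes % cluster_sz_bytes == 0;
}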
00:08:17.799 [2024-07-25 13:51:06.829204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:18.058 passed 00:08:18.058 Test: bs_resize_md ...passed 00:08:18.058 Test: bs_destroy ...passed 00:08:18.058 Test: bs_type ...passed 00:08:18.058 Test: bs_super_block ...passed 00:08:18.058 Test: bs_test_recover_cluster_count ...passed 00:08:18.058 Test: bs_grow_live ...passed 00:08:18.058 Test: bs_grow_live_no_space ...passed 00:08:18.058 Test: bs_test_grow ...passed 00:08:18.058 Test: blob_serialize_test ...passed 00:08:18.058 Test: super_block_crc ...passed 00:08:18.058 Test: blob_thin_prov_write_count_io ...passed 00:08:18.058 Test: blob_thin_prov_unmap_cluster ...passed 00:08:18.058 Test: bs_load_iter_test ...passed 00:08:18.058 Test: blob_relations ...[2024-07-25 13:51:07.061896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:18.058 [2024-07-25 13:51:07.062006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.058 [2024-07-25 13:51:07.062819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:18.058 [2024-07-25 13:51:07.062873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.058 passed 00:08:18.058 Test: blob_relations2 ...[2024-07-25 13:51:07.080129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:18.058 [2024-07-25 13:51:07.080212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.058 [2024-07-25 13:51:07.080249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:18.058 [2024-07-25 13:51:07.080267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.058 [2024-07-25 13:51:07.081554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:18.058 [2024-07-25 13:51:07.081620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.058 [2024-07-25 13:51:07.082124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:18.058 [2024-07-25 13:51:07.082178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.058 passed 00:08:18.317 Test: blob_relations3 ...passed 00:08:18.317 Test: blobstore_clean_power_failure ...passed 00:08:18.317 Test: blob_delete_snapshot_power_failure ...[2024-07-25 13:51:07.285125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:18.317 [2024-07-25 13:51:07.300159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:18.317 [2024-07-25 13:51:07.300288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:18.317 [2024-07-25 13:51:07.300320] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.317 [2024-07-25 13:51:07.315711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:18.317 [2024-07-25 13:51:07.315818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:18.317 [2024-07-25 13:51:07.315846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:18.317 [2024-07-25 13:51:07.315885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.317 [2024-07-25 13:51:07.330943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:18.317 [2024-07-25 13:51:07.331070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.317 [2024-07-25 13:51:07.346107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:18.317 [2024-07-25 13:51:07.346255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.575 [2024-07-25 13:51:07.361965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:18.575 [2024-07-25 13:51:07.362087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:18.575 passed 00:08:18.575 Test: blob_create_snapshot_power_failure ...[2024-07-25 13:51:07.407908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:18.575 [2024-07-25 13:51:07.437284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:18.575 [2024-07-25 13:51:07.452960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:18.575 passed 00:08:18.575 Test: blob_io_unit ...passed 00:08:18.575 Test: blob_io_unit_compatibility ...passed 00:08:18.575 Test: blob_ext_md_pages ...passed 00:08:18.575 Test: blob_esnap_io_4096_4096 ...passed 00:08:18.833 Test: blob_esnap_io_512_512 ...passed 00:08:18.833 Test: blob_esnap_io_4096_512 ...passed 00:08:18.833 Test: blob_esnap_io_512_4096 ...passed 00:08:18.833 Test: blob_esnap_clone_resize ...passed 00:08:18.833 Suite: blob_bs_copy_noextent 00:08:18.833 Test: blob_open ...passed 00:08:18.833 Test: blob_create ...[2024-07-25 13:51:07.789762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:18.833 passed 00:08:19.091 Test: blob_create_loop ...passed 00:08:19.091 Test: blob_create_fail ...[2024-07-25 13:51:07.904497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:19.091 passed 00:08:19.091 Test: blob_create_internal ...passed 00:08:19.091 Test: blob_create_zero_extent ...passed 00:08:19.091 Test: blob_snapshot ...passed 00:08:19.091 Test: blob_clone ...passed 00:08:19.091 Test: blob_inflate 
...[2024-07-25 13:51:08.122890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:19.349 passed 00:08:19.349 Test: blob_delete ...passed 00:08:19.349 Test: blob_resize_test ...[2024-07-25 13:51:08.207934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:19.349 passed 00:08:19.349 Test: blob_resize_thin_test ...passed 00:08:19.350 Test: channel_ops ...passed 00:08:19.350 Test: blob_super ...passed 00:08:19.608 Test: blob_rw_verify_iov ...passed 00:08:19.608 Test: blob_unmap ...passed 00:08:19.608 Test: blob_iter ...passed 00:08:19.608 Test: blob_parse_md ...passed 00:08:19.608 Test: bs_load_pending_removal ...passed 00:08:19.608 Test: bs_unload ...[2024-07-25 13:51:08.589332] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:19.608 passed 00:08:19.608 Test: bs_usable_clusters ...passed 00:08:19.867 Test: blob_crc ...[2024-07-25 13:51:08.675541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:19.867 [2024-07-25 13:51:08.675693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:19.867 passed 00:08:19.867 Test: blob_flags ...passed 00:08:19.867 Test: bs_version ...passed 00:08:19.867 Test: blob_set_xattrs_test ...[2024-07-25 13:51:08.801959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:19.867 [2024-07-25 13:51:08.802116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:19.867 passed 00:08:20.125 Test: blob_thin_prov_alloc ...passed 00:08:20.125 Test: blob_insert_cluster_msg_test ...passed 00:08:20.125 Test: blob_thin_prov_rw ...passed 00:08:20.125 Test: blob_thin_prov_rle ...passed 00:08:20.125 Test: blob_thin_prov_rw_iov ...passed 00:08:20.384 Test: blob_snapshot_rw ...passed 00:08:20.384 Test: blob_snapshot_rw_iov ...passed 00:08:20.642 Test: blob_inflate_rw ...passed 00:08:20.642 Test: blob_snapshot_freeze_io ...passed 00:08:20.901 Test: blob_operation_split_rw ...passed 00:08:20.901 Test: blob_operation_split_rw_iov ...passed 00:08:20.901 Test: blob_simultaneous_operations ...[2024-07-25 13:51:09.924711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:20.901 [2024-07-25 13:51:09.924840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:20.901 [2024-07-25 13:51:09.925550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:20.901 [2024-07-25 13:51:09.925603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:20.901 [2024-07-25 13:51:09.929418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:20.901 [2024-07-25 13:51:09.929490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:20.901 [2024-07-25 13:51:09.929605] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:20.901 [2024-07-25 13:51:09.929629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:21.159 passed 00:08:21.159 Test: blob_persist_test ...passed 00:08:21.159 Test: blob_decouple_snapshot ...passed 00:08:21.159 Test: blob_seek_io_unit ...passed 00:08:21.159 Test: blob_nested_freezes ...passed 00:08:21.417 Test: blob_clone_resize ...passed 00:08:21.417 Test: blob_shallow_copy ...[2024-07-25 13:51:10.236435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:21.417 [2024-07-25 13:51:10.236804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:21.417 [2024-07-25 13:51:10.237406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:21.417 passed 00:08:21.417 Suite: blob_blob_copy_noextent 00:08:21.417 Test: blob_write ...passed 00:08:21.417 Test: blob_read ...passed 00:08:21.417 Test: blob_rw_verify ...passed 00:08:21.417 Test: blob_rw_verify_iov_nomem ...passed 00:08:21.676 Test: blob_rw_iov_read_only ...passed 00:08:21.676 Test: blob_xattr ...passed 00:08:21.676 Test: blob_dirty_shutdown ...passed 00:08:21.676 Test: blob_is_degraded ...passed 00:08:21.676 Suite: blob_esnap_bs_copy_noextent 00:08:21.676 Test: blob_esnap_create ...passed 00:08:21.676 Test: blob_esnap_thread_add_remove ...passed 00:08:21.935 Test: blob_esnap_clone_snapshot ...passed 00:08:21.935 Test: blob_esnap_clone_inflate ...passed 00:08:21.935 Test: blob_esnap_clone_decouple ...passed 00:08:21.935 Test: blob_esnap_clone_reload ...passed 00:08:21.935 Test: blob_esnap_hotplug ...passed 00:08:21.935 Test: blob_set_parent ...[2024-07-25 13:51:10.951454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:21.935 [2024-07-25 13:51:10.951571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:21.935 [2024-07-25 13:51:10.951684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:21.935 [2024-07-25 13:51:10.951734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:21.935 [2024-07-25 13:51:10.952434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:21.935 passed 00:08:22.193 Test: blob_set_external_parent ...[2024-07-25 13:51:10.996325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:22.193 [2024-07-25 13:51:10.996451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:22.193 [2024-07-25 13:51:10.996483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:08:22.193 [2024-07-25 13:51:10.997095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:22.193 passed 00:08:22.193 Suite: blob_copy_extent 00:08:22.193 Test: blob_init ...[2024-07-25 13:51:11.011732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:22.193 passed 00:08:22.193 Test: blob_thin_provision ...passed 00:08:22.193 Test: blob_read_only ...passed 00:08:22.193 Test: bs_load ...[2024-07-25 13:51:11.072268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:22.193 passed 00:08:22.193 Test: bs_load_custom_cluster_size ...passed 00:08:22.193 Test: bs_load_after_failed_grow ...passed 00:08:22.193 Test: bs_cluster_sz ...[2024-07-25 13:51:11.105231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:22.193 [2024-07-25 13:51:11.105461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:22.193 [2024-07-25 13:51:11.105510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:22.193 passed 00:08:22.193 Test: bs_resize_md ...passed 00:08:22.193 Test: bs_destroy ...passed 00:08:22.193 Test: bs_type ...passed 00:08:22.193 Test: bs_super_block ...passed 00:08:22.193 Test: bs_test_recover_cluster_count ...passed 00:08:22.193 Test: bs_grow_live ...passed 00:08:22.193 Test: bs_grow_live_no_space ...passed 00:08:22.193 Test: bs_test_grow ...passed 00:08:22.193 Test: blob_serialize_test ...passed 00:08:22.452 Test: super_block_crc ...passed 00:08:22.452 Test: blob_thin_prov_write_count_io ...passed 00:08:22.452 Test: blob_thin_prov_unmap_cluster ...passed 00:08:22.452 Test: bs_load_iter_test ...passed 00:08:22.452 Test: blob_relations ...[2024-07-25 13:51:11.327723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:22.452 [2024-07-25 13:51:11.327887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.452 [2024-07-25 13:51:11.328768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:22.452 [2024-07-25 13:51:11.328836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.452 passed 00:08:22.452 Test: blob_relations2 ...[2024-07-25 13:51:11.346274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:22.452 [2024-07-25 13:51:11.346382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.452 [2024-07-25 13:51:11.346426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:22.452 [2024-07-25 13:51:11.346447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.452 [2024-07-25 
13:51:11.348190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:22.452 [2024-07-25 13:51:11.348289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.452 [2024-07-25 13:51:11.349176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:22.452 [2024-07-25 13:51:11.349277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.452 passed 00:08:22.452 Test: blob_relations3 ...passed 00:08:22.743 Test: blobstore_clean_power_failure ...passed 00:08:22.743 Test: blob_delete_snapshot_power_failure ...[2024-07-25 13:51:11.550582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:22.743 [2024-07-25 13:51:11.565996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:22.743 [2024-07-25 13:51:11.582567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:22.743 [2024-07-25 13:51:11.582744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:22.743 [2024-07-25 13:51:11.582785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.743 [2024-07-25 13:51:11.598711] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:22.743 [2024-07-25 13:51:11.598848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:22.743 [2024-07-25 13:51:11.598876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:22.743 [2024-07-25 13:51:11.598907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.743 [2024-07-25 13:51:11.615541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:22.743 [2024-07-25 13:51:11.620164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:22.743 [2024-07-25 13:51:11.620233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:22.743 [2024-07-25 13:51:11.620271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.743 [2024-07-25 13:51:11.635967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:22.743 [2024-07-25 13:51:11.636096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.743 [2024-07-25 13:51:11.651725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:22.743 [2024-07-25 13:51:11.651873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.743 [2024-07-25 13:51:11.667997] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:22.743 [2024-07-25 13:51:11.668121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:22.743 passed 00:08:22.743 Test: blob_create_snapshot_power_failure ...[2024-07-25 13:51:11.716023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:22.743 [2024-07-25 13:51:11.731250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:22.743 [2024-07-25 13:51:11.761467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:23.021 [2024-07-25 13:51:11.776834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:23.021 passed 00:08:23.021 Test: blob_io_unit ...passed 00:08:23.021 Test: blob_io_unit_compatibility ...passed 00:08:23.021 Test: blob_ext_md_pages ...passed 00:08:23.021 Test: blob_esnap_io_4096_4096 ...passed 00:08:23.021 Test: blob_esnap_io_512_512 ...passed 00:08:23.021 Test: blob_esnap_io_4096_512 ...passed 00:08:23.021 Test: blob_esnap_io_512_4096 ...passed 00:08:23.021 Test: blob_esnap_clone_resize ...passed 00:08:23.021 Suite: blob_bs_copy_extent 00:08:23.279 Test: blob_open ...passed 00:08:23.279 Test: blob_create ...[2024-07-25 13:51:12.118082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:23.279 passed 00:08:23.279 Test: blob_create_loop ...passed 00:08:23.279 Test: blob_create_fail ...[2024-07-25 13:51:12.249290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:23.279 passed 00:08:23.279 Test: blob_create_internal ...passed 00:08:23.536 Test: blob_create_zero_extent ...passed 00:08:23.536 Test: blob_snapshot ...passed 00:08:23.536 Test: blob_clone ...passed 00:08:23.536 Test: blob_inflate ...[2024-07-25 13:51:12.469627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:08:23.536 passed 00:08:23.536 Test: blob_delete ...passed 00:08:23.536 Test: blob_resize_test ...[2024-07-25 13:51:12.550405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:23.536 passed 00:08:23.793 Test: blob_resize_thin_test ...passed 00:08:23.793 Test: channel_ops ...passed 00:08:23.793 Test: blob_super ...passed 00:08:23.793 Test: blob_rw_verify_iov ...passed 00:08:23.793 Test: blob_unmap ...passed 00:08:23.793 Test: blob_iter ...passed 00:08:24.051 Test: blob_parse_md ...passed 00:08:24.051 Test: bs_load_pending_removal ...passed 00:08:24.051 Test: bs_unload ...[2024-07-25 13:51:12.941169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:24.051 passed 00:08:24.051 Test: bs_usable_clusters ...passed 00:08:24.051 Test: blob_crc ...[2024-07-25 13:51:13.026230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:24.051 [2024-07-25 13:51:13.026387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:24.051 passed 00:08:24.051 Test: blob_flags ...passed 00:08:24.310 Test: bs_version ...passed 00:08:24.310 Test: blob_set_xattrs_test ...[2024-07-25 13:51:13.152106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:24.310 [2024-07-25 13:51:13.152235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:24.310 passed 00:08:24.310 Test: blob_thin_prov_alloc ...passed 00:08:24.568 Test: blob_insert_cluster_msg_test ...passed 00:08:24.568 Test: blob_thin_prov_rw ...passed 00:08:24.568 Test: blob_thin_prov_rle ...passed 00:08:24.568 Test: blob_thin_prov_rw_iov ...passed 00:08:24.568 Test: blob_snapshot_rw ...passed 00:08:24.568 Test: blob_snapshot_rw_iov ...passed 00:08:24.826 Test: blob_inflate_rw ...passed 00:08:24.826 Test: blob_snapshot_freeze_io ...passed 00:08:25.107 Test: blob_operation_split_rw ...passed 00:08:25.372 Test: blob_operation_split_rw_iov ...passed 00:08:25.372 Test: blob_simultaneous_operations ...[2024-07-25 13:51:14.257184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:25.372 [2024-07-25 13:51:14.257292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:25.372 [2024-07-25 13:51:14.258110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:25.372 [2024-07-25 13:51:14.258167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:25.372 [2024-07-25 13:51:14.261427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:25.372 [2024-07-25 13:51:14.261499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:25.372 [2024-07-25 13:51:14.261639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:25.372 [2024-07-25 13:51:14.261664] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:25.372 passed 00:08:25.372 Test: blob_persist_test ...passed 00:08:25.372 Test: blob_decouple_snapshot ...passed 00:08:25.630 Test: blob_seek_io_unit ...passed 00:08:25.630 Test: blob_nested_freezes ...passed 00:08:25.630 Test: blob_clone_resize ...passed 00:08:25.630 Test: blob_shallow_copy ...[2024-07-25 13:51:14.570553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:25.630 [2024-07-25 13:51:14.570901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:25.630 [2024-07-25 13:51:14.571702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:25.630 passed 00:08:25.630 Suite: blob_blob_copy_extent 00:08:25.630 Test: blob_write ...passed 00:08:25.888 Test: blob_read ...passed 00:08:25.888 Test: blob_rw_verify ...passed 00:08:25.888 Test: blob_rw_verify_iov_nomem ...passed 00:08:25.888 Test: blob_rw_iov_read_only ...passed 00:08:25.888 Test: blob_xattr ...passed 00:08:25.888 Test: blob_dirty_shutdown ...passed 00:08:26.146 Test: blob_is_degraded ...passed 00:08:26.146 Suite: blob_esnap_bs_copy_extent 00:08:26.146 Test: blob_esnap_create ...passed 00:08:26.146 Test: blob_esnap_thread_add_remove ...passed 00:08:26.146 Test: blob_esnap_clone_snapshot ...passed 00:08:26.146 Test: blob_esnap_clone_inflate ...passed 00:08:26.146 Test: blob_esnap_clone_decouple ...passed 00:08:26.403 Test: blob_esnap_clone_reload ...passed 00:08:26.403 Test: blob_esnap_hotplug ...passed 00:08:26.403 Test: blob_set_parent ...[2024-07-25 13:51:15.293819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:26.403 [2024-07-25 13:51:15.293962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:26.403 [2024-07-25 13:51:15.294113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:26.403 [2024-07-25 13:51:15.294165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:26.403 [2024-07-25 13:51:15.295041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:26.403 passed 00:08:26.403 Test: blob_set_external_parent ...[2024-07-25 13:51:15.339978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:26.403 [2024-07-25 13:51:15.340155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:26.403 [2024-07-25 13:51:15.340223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:26.403 [2024-07-25 13:51:15.341085] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:26.403 passed 00:08:26.403 00:08:26.403 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.403 suites 16 16 n/a 0 0 00:08:26.403 tests 376 376 376 0 0 00:08:26.403 asserts 143973 143973 143973 0 n/a 00:08:26.403 00:08:26.403 Elapsed time = 17.235 seconds 00:08:26.662 13:51:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:26.662 00:08:26.662 00:08:26.662 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.662 http://cunit.sourceforge.net/ 00:08:26.662 00:08:26.662 00:08:26.662 Suite: blob_bdev 00:08:26.662 Test: create_bs_dev ...passed 00:08:26.662 Test: create_bs_dev_ro ...[2024-07-25 13:51:15.473912] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:26.662 passed 00:08:26.662 Test: create_bs_dev_rw ...passed 00:08:26.662 Test: claim_bs_dev ...[2024-07-25 13:51:15.474480] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:26.662 passed 00:08:26.662 Test: claim_bs_dev_ro ...passed 00:08:26.662 Test: deferred_destroy_refs ...passed 00:08:26.662 Test: deferred_destroy_channels ...passed 00:08:26.662 Test: deferred_destroy_threads ...passed 00:08:26.662 00:08:26.662 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.662 suites 1 1 n/a 0 0 00:08:26.662 tests 8 8 8 0 0 00:08:26.662 asserts 119 119 119 0 n/a 00:08:26.662 00:08:26.662 Elapsed time = 0.001 seconds 00:08:26.662 13:51:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:26.662 00:08:26.662 00:08:26.662 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.662 http://cunit.sourceforge.net/ 00:08:26.662 00:08:26.662 00:08:26.662 Suite: tree 00:08:26.662 Test: blobfs_tree_op_test ...passed 00:08:26.662 00:08:26.662 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.662 suites 1 1 n/a 0 0 00:08:26.662 tests 1 1 1 0 0 00:08:26.662 asserts 27 27 27 0 n/a 00:08:26.662 00:08:26.662 Elapsed time = 0.000 seconds 00:08:26.662 13:51:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:26.662 00:08:26.662 00:08:26.662 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.662 http://cunit.sourceforge.net/ 00:08:26.662 00:08:26.662 00:08:26.662 Suite: blobfs_async_ut 00:08:26.662 Test: fs_init ...passed 00:08:26.662 Test: fs_open ...passed 00:08:26.662 Test: fs_create ...passed 00:08:26.662 Test: fs_truncate ...passed 00:08:26.662 Test: fs_rename ...[2024-07-25 13:51:15.688543] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:26.662 passed 00:08:26.920 Test: fs_rw_async ...passed 00:08:26.920 Test: fs_writev_readv_async ...passed 00:08:26.920 Test: tree_find_buffer_ut ...passed 00:08:26.920 Test: channel_ops ...passed 00:08:26.920 Test: channel_ops_sync ...passed 00:08:26.920 00:08:26.920 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.920 suites 1 1 n/a 0 0 00:08:26.920 tests 10 10 10 0 0 00:08:26.920 asserts 292 292 292 0 n/a 00:08:26.920 00:08:26.920 Elapsed time = 0.210 seconds 00:08:26.920 13:51:15 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:26.920 00:08:26.920 00:08:26.920 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.920 http://cunit.sourceforge.net/ 00:08:26.920 00:08:26.920 00:08:26.920 Suite: blobfs_sync_ut 00:08:26.920 Test: cache_read_after_write ...[2024-07-25 13:51:15.903337] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:26.920 passed 00:08:26.920 Test: file_length ...passed 00:08:26.920 Test: append_write_to_extend_blob ...passed 00:08:27.179 Test: partial_buffer ...passed 00:08:27.179 Test: cache_write_null_buffer ...passed 00:08:27.179 Test: fs_create_sync ...passed 00:08:27.179 Test: fs_rename_sync ...passed 00:08:27.179 Test: cache_append_no_cache ...passed 00:08:27.179 Test: fs_delete_file_without_close ...passed 00:08:27.179 00:08:27.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.179 suites 1 1 n/a 0 0 00:08:27.179 tests 9 9 9 0 0 00:08:27.179 asserts 345 345 345 0 n/a 00:08:27.179 00:08:27.179 Elapsed time = 0.430 seconds 00:08:27.179 13:51:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:27.179 00:08:27.179 00:08:27.179 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.179 http://cunit.sourceforge.net/ 00:08:27.179 00:08:27.179 00:08:27.179 Suite: blobfs_bdev_ut 00:08:27.179 Test: spdk_blobfs_bdev_detect_test ...[2024-07-25 13:51:16.123550] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:27.179 passed 00:08:27.179 Test: spdk_blobfs_bdev_create_test ...passed 00:08:27.179 Test: spdk_blobfs_bdev_mount_test ...passed[2024-07-25 13:51:16.124042] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:27.179 00:08:27.179 00:08:27.179 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.179 suites 1 1 n/a 0 0 00:08:27.179 tests 3 3 3 0 0 00:08:27.179 asserts 9 9 9 0 n/a 00:08:27.179 00:08:27.180 Elapsed time = 0.001 seconds 00:08:27.180 00:08:27.180 real 0m18.078s 00:08:27.180 user 0m17.263s 00:08:27.180 sys 0m1.023s 00:08:27.180 13:51:16 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.180 13:51:16 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:08:27.180 ************************************ 00:08:27.180 END TEST unittest_blob_blobfs 00:08:27.180 ************************************ 00:08:27.180 13:51:16 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:08:27.180 13:51:16 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.180 13:51:16 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.180 13:51:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:27.180 ************************************ 00:08:27.180 START TEST unittest_event 00:08:27.180 ************************************ 00:08:27.180 13:51:16 unittest.unittest_event -- common/autotest_common.sh@1125 -- # unittest_event 00:08:27.180 13:51:16 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:27.180 00:08:27.180 00:08:27.180 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:27.180 http://cunit.sourceforge.net/ 00:08:27.180 00:08:27.180 00:08:27.180 Suite: app_suite 00:08:27.180 Test: test_spdk_app_parse_args ...app_ut [options] 00:08:27.180 00:08:27.180 CPU options: 00:08:27.180 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:27.180 (like [0,1,10]) 00:08:27.180 --lcores lcore to CPU mapping list. The list is in the format: 00:08:27.180 [<,lcores[@CPUs]>...] 00:08:27.180 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:27.180 Within the group, '-' is used for range separator, 00:08:27.180 ',' is used for single number separator. 00:08:27.180 '( )' can be omitted for single element group, 00:08:27.180 '@' can be omitted if cpus and lcores have the same value 00:08:27.180 --disable-cpumask-locks Disable CPU core lock files. 00:08:27.180 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:27.180 pollers in the app support interrupt mode) 00:08:27.180 -p, --main-core main (primary) core for DPDK 00:08:27.180 00:08:27.180 Configuration options: 00:08:27.180 -c, --config, --json JSON config file 00:08:27.180 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:27.180 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:27.180 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:27.180 --rpcs-allowed comma-separated list of permitted RPCS 00:08:27.180 --json-ignore-init-errors don't exit on invalid config entry 00:08:27.180 00:08:27.180 Memory options: 00:08:27.180 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:27.180 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:27.180 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:27.180 -R, --huge-unlink unlink huge files after initialization 00:08:27.180 -n, --mem-channels number of memory channels used for DPDK 00:08:27.180 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:27.180 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:27.180 --no-huge run without using hugepages 00:08:27.180 -i, --shm-id shared memory ID (optional) 00:08:27.180 -g, --single-file-segments force creating just one hugetlbfs file 00:08:27.180 00:08:27.180 PCI options: 00:08:27.180 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:27.180 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:27.180 -u, --no-pci disable PCI access 00:08:27.180 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:27.180 00:08:27.180 Log options: 00:08:27.180 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:27.180 --silence-noticelog disable notice level logging to stderr 00:08:27.180 00:08:27.180 Trace options: 00:08:27.180 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:27.180 setting 0 to disable trace (default 32768) 00:08:27.180 app_ut: invalid option -- 'z' 00:08:27.180 Tracepoints vary in size and can use more than one trace entry. 00:08:27.180 -e, --tpoint-group [:] 00:08:27.180 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:27.180 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:27.180 a tracepoint group. First tpoint inside a group can be enabled by 00:08:27.180 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:08:27.180 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:27.180 in /include/spdk_internal/trace_defs.h 00:08:27.180 00:08:27.180 Other options: 00:08:27.180 -h, --help show this usage 00:08:27.180 -v, --version print SPDK version 00:08:27.180 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:27.180 --env-context Opaque context for use of the env implementation 00:08:27.180 app_ut [options] 00:08:27.180 00:08:27.180 CPU options: 00:08:27.180 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:27.180 (like [0,1,10]) 00:08:27.180 --lcores lcore to CPU mapping list. The list is in the format: 00:08:27.180 [<,lcores[@CPUs]>...] 00:08:27.180 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:27.180 Within the group, '-' is used for range separator, 00:08:27.180 ',' is used for single number separator. 00:08:27.180 '( )' can be omitted for single element group, 00:08:27.180 '@' can be omitted if cpus and lcores have the same value 00:08:27.180 --disable-cpumask-locks Disable CPU core lock files. 00:08:27.180 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:27.180 pollers in the app support interrupt mode) 00:08:27.180 -p, --main-core main (primary) core for DPDK 00:08:27.180 00:08:27.180 Configuration options: 00:08:27.180 -c, --config, --json JSON config file 00:08:27.180 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:27.180 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:27.180 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:27.180 --rpcs-allowed comma-separated list of permitted RPCS 00:08:27.180 --json-ignore-init-errors don't exit on invalid config entry 00:08:27.180 00:08:27.180 Memory options: 00:08:27.180 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:27.180 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:27.180 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:27.180 -R, --huge-unlink unlink huge files after initialization 00:08:27.180 -n, --mem-channels number of memory channels used for DPDK 00:08:27.180 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:27.180 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:27.180 --no-huge run without using hugepages 00:08:27.180 -i, --shm-id shared memory ID (optional) 00:08:27.180 -g, --single-file-segments force creating just one hugetlbfs file 00:08:27.180 00:08:27.180 PCI options: 00:08:27.180 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:27.180 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:27.180 -u, --no-pci disable PCI access 00:08:27.180 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:27.180 00:08:27.180 Log options: 00:08:27.180 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:27.180 --silence-noticelog disable notice level logging to stderr 00:08:27.180 00:08:27.180 Trace options: 00:08:27.180 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:27.180 setting 0 to disable trace (default 32768) 00:08:27.180 Tracepoints vary in size and can use more than one trace entry. 
00:08:27.180 -e, --tpoint-group [:] 00:08:27.180 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:27.180 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:27.180 a tracepoint group. First tpoint inside a group can be enabled by 00:08:27.180 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:27.180 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:27.180 in /include/spdk_internal/trace_defs.h 00:08:27.180 00:08:27.180 Other options: 00:08:27.180 -h, --help show this usage 00:08:27.180 -v, --version print SPDK version 00:08:27.180 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:27.180 --env-context Opaque context for use of the env implementation 00:08:27.180 app_ut: unrecognized option '--test-long-opt' 00:08:27.181 [2024-07-25 13:51:16.212997] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:08:27.181 [2024-07-25 13:51:16.213838] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:27.181 app_ut [options] 00:08:27.181 00:08:27.181 CPU options: 00:08:27.181 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:27.181 (like [0,1,10]) 00:08:27.181 --lcores lcore to CPU mapping list. The list is in the format: 00:08:27.181 [<,lcores[@CPUs]>...] 00:08:27.181 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:27.181 Within the group, '-' is used for range separator, 00:08:27.181 ',' is used for single number separator. 00:08:27.181 '( )' can be omitted for single element group, 00:08:27.181 '@' can be omitted if cpus and lcores have the same value 00:08:27.181 --disable-cpumask-locks Disable CPU core lock files. 00:08:27.181 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:27.181 pollers in the app support interrupt mode) 00:08:27.181 -p, --main-core main (primary) core for DPDK 00:08:27.181 00:08:27.181 Configuration options: 00:08:27.181 -c, --config, --json JSON config file 00:08:27.181 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:27.181 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:27.181 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:27.181 --rpcs-allowed comma-separated list of permitted RPCS 00:08:27.181 --json-ignore-init-errors don't exit on invalid config entry 00:08:27.181 00:08:27.181 Memory options: 00:08:27.181 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:27.181 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:27.181 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:27.181 -R, --huge-unlink unlink huge files after initialization 00:08:27.181 -n, --mem-channels number of memory channels used for DPDK 00:08:27.181 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:27.181 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:27.181 --no-huge run without using hugepages 00:08:27.181 -i, --shm-id shared memory ID (optional) 00:08:27.181 -g, --single-file-segments force creating just one hugetlbfs file 00:08:27.181 00:08:27.181 PCI options: 00:08:27.181 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:27.181 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:27.181 -u, --no-pci disable PCI access 00:08:27.181 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:27.181 00:08:27.181 Log options: 00:08:27.181 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:27.181 --silence-noticelog disable notice level logging to stderr 00:08:27.181 00:08:27.181 Trace options: 00:08:27.181 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:27.181 setting 0 to disable trace (default 32768) 00:08:27.181 Tracepoints vary in size and can use more than one trace entry. 00:08:27.181 -e, --tpoint-group [:] 00:08:27.181 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:27.181 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:27.181 a tracepoint group. First tpoint inside a group can be enabled by 00:08:27.181 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:27.181 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:27.181 in /include/spdk_internal/trace_defs.h 00:08:27.181 00:08:27.181 Other options: 00:08:27.181 -h, --help show this usage 00:08:27.181 -v, --version print SPDK version 00:08:27.181 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:27.181 --env-context Opaque context for use of the env implementation 00:08:27.181 [2024-07-25 13:51:16.214268] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:27.181 passed 00:08:27.181 00:08:27.181 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.181 suites 1 1 n/a 0 0 00:08:27.181 tests 1 1 1 0 0 00:08:27.181 asserts 8 8 8 0 n/a 00:08:27.181 00:08:27.181 Elapsed time = 0.001 seconds 00:08:27.440 13:51:16 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:27.440 00:08:27.440 00:08:27.440 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.440 http://cunit.sourceforge.net/ 00:08:27.440 00:08:27.440 00:08:27.440 Suite: app_suite 00:08:27.440 Test: test_create_reactor ...passed 00:08:27.440 Test: test_init_reactors ...passed 00:08:27.440 Test: test_event_call ...passed 00:08:27.440 Test: test_schedule_thread ...passed 00:08:27.440 Test: test_reschedule_thread ...passed 00:08:27.440 Test: test_bind_thread ...passed 00:08:27.440 Test: test_for_each_reactor ...passed 00:08:27.440 Test: test_reactor_stats ...passed 00:08:27.440 Test: test_scheduler ...passed 00:08:27.440 Test: test_governor ...passed 00:08:27.440 00:08:27.440 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.440 suites 1 1 n/a 0 0 00:08:27.440 tests 10 10 10 0 0 00:08:27.440 asserts 344 344 344 0 n/a 00:08:27.440 00:08:27.440 Elapsed time = 0.026 seconds 00:08:27.440 00:08:27.440 real 0m0.104s 00:08:27.440 user 0m0.069s 00:08:27.440 sys 0m0.032s 00:08:27.440 13:51:16 unittest.unittest_event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.440 13:51:16 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:08:27.440 ************************************ 00:08:27.440 END TEST unittest_event 00:08:27.440 ************************************ 00:08:27.440 13:51:16 unittest -- unit/unittest.sh@235 -- # uname -s 00:08:27.440 13:51:16 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:08:27.440 13:51:16 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:08:27.440 13:51:16 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.440 13:51:16 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.440 13:51:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:27.440 ************************************ 00:08:27.440 START TEST unittest_ftl 00:08:27.440 ************************************ 00:08:27.440 13:51:16 unittest.unittest_ftl -- common/autotest_common.sh@1125 -- # unittest_ftl 00:08:27.440 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:27.440 00:08:27.440 00:08:27.440 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.440 http://cunit.sourceforge.net/ 00:08:27.440 00:08:27.440 00:08:27.440 Suite: ftl_band_suite 00:08:27.440 Test: test_band_block_offset_from_addr_base ...passed 00:08:27.440 Test: test_band_block_offset_from_addr_offset ...passed 00:08:27.440 Test: test_band_addr_from_block_offset ...passed 00:08:27.698 Test: test_band_set_addr 
...passed 00:08:27.698 Test: test_invalidate_addr ...passed 00:08:27.698 Test: test_next_xfer_addr ...passed 00:08:27.698 00:08:27.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.698 suites 1 1 n/a 0 0 00:08:27.698 tests 6 6 6 0 0 00:08:27.698 asserts 30356 30356 30356 0 n/a 00:08:27.698 00:08:27.698 Elapsed time = 0.168 seconds 00:08:27.698 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:27.698 00:08:27.698 00:08:27.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.698 http://cunit.sourceforge.net/ 00:08:27.698 00:08:27.698 00:08:27.698 Suite: ftl_bitmap 00:08:27.698 Test: test_ftl_bitmap_create ...[2024-07-25 13:51:16.617817] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:27.698 passed 00:08:27.698 Test: test_ftl_bitmap_get ...[2024-07-25 13:51:16.618092] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:27.698 passed 00:08:27.698 Test: test_ftl_bitmap_set ...passed 00:08:27.698 Test: test_ftl_bitmap_clear ...passed 00:08:27.698 Test: test_ftl_bitmap_find_first_set ...passed 00:08:27.698 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:27.698 Test: test_ftl_bitmap_count_set ...passed 00:08:27.698 00:08:27.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.698 suites 1 1 n/a 0 0 00:08:27.698 tests 7 7 7 0 0 00:08:27.698 asserts 137 137 137 0 n/a 00:08:27.698 00:08:27.698 Elapsed time = 0.001 seconds 00:08:27.698 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:27.698 00:08:27.698 00:08:27.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.698 http://cunit.sourceforge.net/ 00:08:27.698 00:08:27.698 00:08:27.698 Suite: ftl_io_suite 00:08:27.698 Test: test_completion ...passed 00:08:27.698 Test: test_multiple_ios ...passed 00:08:27.698 00:08:27.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.698 suites 1 1 n/a 0 0 00:08:27.698 tests 2 2 2 0 0 00:08:27.698 asserts 47 47 47 0 n/a 00:08:27.698 00:08:27.698 Elapsed time = 0.003 seconds 00:08:27.698 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:27.698 00:08:27.698 00:08:27.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.698 http://cunit.sourceforge.net/ 00:08:27.698 00:08:27.698 00:08:27.698 Suite: ftl_mngt 00:08:27.698 Test: test_next_step ...passed 00:08:27.698 Test: test_continue_step ...passed 00:08:27.698 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:27.698 Test: test_fail_step ...passed 00:08:27.698 Test: test_mngt_call_and_call_rollback ...passed 00:08:27.698 Test: test_nested_process_failure ...passed 00:08:27.698 Test: test_call_init_success ...passed 00:08:27.698 Test: test_call_init_failure ...passed 00:08:27.698 00:08:27.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.698 suites 1 1 n/a 0 0 00:08:27.698 tests 8 8 8 0 0 00:08:27.698 asserts 196 196 196 0 n/a 00:08:27.698 00:08:27.698 Elapsed time = 0.002 seconds 00:08:27.698 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:27.698 00:08:27.698 00:08:27.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.698 
http://cunit.sourceforge.net/ 00:08:27.698 00:08:27.698 00:08:27.698 Suite: ftl_mempool 00:08:27.698 Test: test_ftl_mempool_create ...passed 00:08:27.698 Test: test_ftl_mempool_get_put ...passed 00:08:27.698 00:08:27.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.698 suites 1 1 n/a 0 0 00:08:27.698 tests 2 2 2 0 0 00:08:27.698 asserts 36 36 36 0 n/a 00:08:27.698 00:08:27.698 Elapsed time = 0.000 seconds 00:08:27.956 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:27.956 00:08:27.956 00:08:27.956 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.956 http://cunit.sourceforge.net/ 00:08:27.956 00:08:27.956 00:08:27.956 Suite: ftl_addr64_suite 00:08:27.956 Test: test_addr_cached ...passed 00:08:27.956 00:08:27.956 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.956 suites 1 1 n/a 0 0 00:08:27.956 tests 1 1 1 0 0 00:08:27.956 asserts 1536 1536 1536 0 n/a 00:08:27.956 00:08:27.956 Elapsed time = 0.000 seconds 00:08:27.956 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:27.956 00:08:27.956 00:08:27.956 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.956 http://cunit.sourceforge.net/ 00:08:27.956 00:08:27.956 00:08:27.956 Suite: ftl_sb 00:08:27.956 Test: test_sb_crc_v2 ...passed 00:08:27.956 Test: test_sb_crc_v3 ...passed 00:08:27.956 Test: test_sb_v3_md_layout ...[2024-07-25 13:51:16.795010] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:27.956 [2024-07-25 13:51:16.795399] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:27.956 [2024-07-25 13:51:16.795470] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:27.956 [2024-07-25 13:51:16.795519] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:27.956 [2024-07-25 13:51:16.795552] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:27.957 [2024-07-25 13:51:16.795677] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:27.957 [2024-07-25 13:51:16.795719] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:27.957 [2024-07-25 13:51:16.795779] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:27.957 [2024-07-25 13:51:16.795902] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:27.957 [2024-07-25 13:51:16.795962] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:27.957 passed 00:08:27.957 Test: test_sb_v5_md_layout ...[2024-07-25 13:51:16.796017] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 
105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:27.957 passed 00:08:27.957 00:08:27.957 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.957 suites 1 1 n/a 0 0 00:08:27.957 tests 4 4 4 0 0 00:08:27.957 asserts 160 160 160 0 n/a 00:08:27.957 00:08:27.957 Elapsed time = 0.002 seconds 00:08:27.957 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:27.957 00:08:27.957 00:08:27.957 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.957 http://cunit.sourceforge.net/ 00:08:27.957 00:08:27.957 00:08:27.957 Suite: ftl_layout_upgrade 00:08:27.957 Test: test_l2p_upgrade ...passed 00:08:27.957 00:08:27.957 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.957 suites 1 1 n/a 0 0 00:08:27.957 tests 1 1 1 0 0 00:08:27.957 asserts 152 152 152 0 n/a 00:08:27.957 00:08:27.957 Elapsed time = 0.001 seconds 00:08:27.957 13:51:16 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:08:27.957 00:08:27.957 00:08:27.957 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.957 http://cunit.sourceforge.net/ 00:08:27.957 00:08:27.957 00:08:27.957 Suite: ftl_p2l_suite 00:08:27.957 Test: test_p2l_num_pages ...passed 00:08:28.523 Test: test_ckpt_issue ...passed 00:08:29.089 Test: test_persist_band_p2l ...passed 00:08:29.347 Test: test_clean_restore_p2l ...passed 00:08:30.737 Test: test_dirty_restore_p2l ...passed 00:08:30.737 00:08:30.737 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.737 suites 1 1 n/a 0 0 00:08:30.737 tests 5 5 5 0 0 00:08:30.737 asserts 10020 10020 10020 0 n/a 00:08:30.737 00:08:30.737 Elapsed time = 2.633 seconds 00:08:30.737 00:08:30.737 real 0m3.159s 00:08:30.737 user 0m1.029s 00:08:30.737 sys 0m2.131s 00:08:30.737 13:51:19 unittest.unittest_ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.737 ************************************ 00:08:30.737 END TEST unittest_ftl 00:08:30.737 ************************************ 00:08:30.737 13:51:19 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:08:30.737 13:51:19 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:30.737 13:51:19 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.737 13:51:19 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.737 13:51:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:30.737 ************************************ 00:08:30.737 START TEST unittest_accel 00:08:30.737 ************************************ 00:08:30.737 13:51:19 unittest.unittest_accel -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:30.737 00:08:30.737 00:08:30.737 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.737 http://cunit.sourceforge.net/ 00:08:30.737 00:08:30.737 00:08:30.737 Suite: accel_sequence 00:08:30.737 Test: test_sequence_fill_copy ...passed 00:08:30.737 Test: test_sequence_abort ...passed 00:08:30.737 Test: test_sequence_append_error ...passed 00:08:30.737 Test: test_sequence_completion_error ...[2024-07-25 13:51:19.593768] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f9db7b547c0 00:08:30.737 [2024-07-25 13:51:19.594241] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f9db7b547c0 00:08:30.737 [2024-07-25 13:51:19.594410] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f9db7b547c0 00:08:30.738 [2024-07-25 13:51:19.594474] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f9db7b547c0 00:08:30.738 passed 00:08:30.738 Test: test_sequence_decompress ...passed 00:08:30.738 Test: test_sequence_reverse ...passed 00:08:30.738 Test: test_sequence_copy_elision ...passed 00:08:30.738 Test: test_sequence_accel_buffers ...passed 00:08:30.738 Test: test_sequence_memory_domain ...[2024-07-25 13:51:19.607072] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1761:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:30.738 [2024-07-25 13:51:19.607270] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1800:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:30.738 passed 00:08:30.738 Test: test_sequence_module_memory_domain ...passed 00:08:30.738 Test: test_sequence_crypto ...passed 00:08:30.738 Test: test_sequence_driver ...[2024-07-25 13:51:19.614853] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1908:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f9db6e037c0 using driver: ut 00:08:30.738 [2024-07-25 13:51:19.615023] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1972:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f9db6e037c0 through driver: ut 00:08:30.738 passed 00:08:30.738 Test: test_sequence_same_iovs ...passed 00:08:30.738 Test: test_sequence_crc32 ...passed 00:08:30.738 Suite: accel 00:08:30.738 Test: test_spdk_accel_task_complete ...passed 00:08:30.738 Test: test_get_task ...passed 00:08:30.738 Test: test_spdk_accel_submit_copy ...passed 00:08:30.738 Test: test_spdk_accel_submit_dualcast ...[2024-07-25 13:51:19.621259] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:30.738 passed 00:08:30.738 Test: test_spdk_accel_submit_compare ...passed 00:08:30.738 Test: test_spdk_accel_submit_fill ...[2024-07-25 13:51:19.621354] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:30.738 passed 00:08:30.738 Test: test_spdk_accel_submit_crc32c ...passed 00:08:30.738 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:30.738 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:30.738 Test: test_spdk_accel_submit_xor ...passed 00:08:30.738 Test: test_spdk_accel_module_find_by_name ...passed 00:08:30.738 Test: test_spdk_accel_module_register ...passed 00:08:30.738 00:08:30.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.738 suites 2 2 n/a 0 0 00:08:30.738 tests 26 26 26 0 0 00:08:30.738 asserts 830 830 830 0 n/a 00:08:30.738 00:08:30.738 Elapsed time = 0.041 seconds 00:08:30.738 00:08:30.738 real 0m0.080s 00:08:30.738 user 0m0.020s 00:08:30.738 sys 0m0.060s 00:08:30.738 13:51:19 unittest.unittest_accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.738 13:51:19 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.738 ************************************ 00:08:30.738 END TEST unittest_accel 00:08:30.738 
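For orientation: every unit test binary driven above (blob_bdev_ut, blobfs_*_ut, app_ut, reactor_ut, ftl_*_ut, accel_ut, ...) is built on CUnit 2.1-3, which is where the per-test "...passed" lines and the "Run Summary" columns (suites / tests / asserts) in this log come from. Below is a minimal, self-contained CUnit sketch for reference only; it is not SPDK code, and the suite and test names are purely illustrative.

#include <CUnit/Basic.h>

/* One test function; each CU_ASSERT that executes is counted in the "asserts" column. */
static void test_example(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* Counted in the "suites" column of the Run Summary. */
	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);	/* prints the per-test "...passed" lines */
	CU_basic_run_tests();			/* prints the Run Summary block */
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return (int)num_failures;		/* non-zero exit when any assertion failed */
}

Built with, e.g., gcc example.c -lcunit, it emits the same style of summary, and its exit status reflects the number of failed assertions, so a wrapper that checks the exit code (as the run_test calls in this log do) can detect failures.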
************************************ 00:08:30.738 13:51:19 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:30.738 13:51:19 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.738 13:51:19 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.738 13:51:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:30.738 ************************************ 00:08:30.738 START TEST unittest_ioat 00:08:30.738 ************************************ 00:08:30.738 13:51:19 unittest.unittest_ioat -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:30.738 00:08:30.738 00:08:30.738 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.738 http://cunit.sourceforge.net/ 00:08:30.738 00:08:30.738 00:08:30.738 Suite: ioat 00:08:30.738 Test: ioat_state_check ...passed 00:08:30.738 00:08:30.738 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.738 suites 1 1 n/a 0 0 00:08:30.738 tests 1 1 1 0 0 00:08:30.738 asserts 32 32 32 0 n/a 00:08:30.738 00:08:30.738 Elapsed time = 0.000 seconds 00:08:30.738 00:08:30.738 real 0m0.032s 00:08:30.738 user 0m0.004s 00:08:30.738 sys 0m0.028s 00:08:30.738 13:51:19 unittest.unittest_ioat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.738 13:51:19 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:08:30.738 ************************************ 00:08:30.738 END TEST unittest_ioat 00:08:30.738 ************************************ 00:08:30.738 13:51:19 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:30.738 13:51:19 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:30.738 13:51:19 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.738 13:51:19 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.738 13:51:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:30.738 ************************************ 00:08:30.738 START TEST unittest_idxd_user 00:08:30.738 ************************************ 00:08:30.738 13:51:19 unittest.unittest_idxd_user -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:30.998 00:08:30.998 00:08:30.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.998 http://cunit.sourceforge.net/ 00:08:30.998 00:08:30.998 00:08:30.998 Suite: idxd_user 00:08:30.998 Test: test_idxd_wait_cmd ...[2024-07-25 13:51:19.785932] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:30.998 passed 00:08:30.998 Test: test_idxd_reset_dev ...[2024-07-25 13:51:19.786175] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:30.998 [2024-07-25 13:51:19.786283] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:30.998 passed 00:08:30.998 Test: test_idxd_group_config ...passed 00:08:30.998 Test: test_idxd_wq_config ...passed 00:08:30.998 00:08:30.998 [2024-07-25 13:51:19.786322] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:30.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.998 
suites 1 1 n/a 0 0 00:08:30.998 tests 4 4 4 0 0 00:08:30.998 asserts 20 20 20 0 n/a 00:08:30.998 00:08:30.998 Elapsed time = 0.001 seconds 00:08:30.998 00:08:30.998 real 0m0.028s 00:08:30.998 user 0m0.016s 00:08:30.998 sys 0m0.013s 00:08:30.998 13:51:19 unittest.unittest_idxd_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.998 13:51:19 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:08:30.998 ************************************ 00:08:30.998 END TEST unittest_idxd_user 00:08:30.998 ************************************ 00:08:30.998 13:51:19 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:08:30.998 13:51:19 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.998 13:51:19 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.998 13:51:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:30.998 ************************************ 00:08:30.998 START TEST unittest_iscsi 00:08:30.998 ************************************ 00:08:30.998 13:51:19 unittest.unittest_iscsi -- common/autotest_common.sh@1125 -- # unittest_iscsi 00:08:30.998 13:51:19 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:30.998 00:08:30.998 00:08:30.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.998 http://cunit.sourceforge.net/ 00:08:30.998 00:08:30.998 00:08:30.998 Suite: conn_suite 00:08:30.998 Test: read_task_split_in_order_case ...passed 00:08:30.998 Test: read_task_split_reverse_order_case ...passed 00:08:30.998 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:30.998 Test: process_non_read_task_completion_test ...passed 00:08:30.998 Test: free_tasks_on_connection ...passed 00:08:30.998 Test: free_tasks_with_queued_datain ...passed 00:08:30.998 Test: abort_queued_datain_task_test ...passed 00:08:30.998 Test: abort_queued_datain_tasks_test ...passed 00:08:30.998 00:08:30.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.998 suites 1 1 n/a 0 0 00:08:30.998 tests 8 8 8 0 0 00:08:30.998 asserts 230 230 230 0 n/a 00:08:30.998 00:08:30.998 Elapsed time = 0.000 seconds 00:08:30.998 13:51:19 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:30.998 00:08:30.998 00:08:30.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.998 http://cunit.sourceforge.net/ 00:08:30.998 00:08:30.998 00:08:30.998 Suite: iscsi_suite 00:08:30.998 Test: param_negotiation_test ...passed 00:08:30.998 Test: list_negotiation_test ...passed 00:08:30.998 Test: parse_valid_test ...passed 00:08:30.998 Test: parse_invalid_test ...[2024-07-25 13:51:19.901345] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:30.998 [2024-07-25 13:51:19.902037] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:30.998 [2024-07-25 13:51:19.902202] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:08:30.998 [2024-07-25 13:51:19.902387] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:30.998 [2024-07-25 13:51:19.902653] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:30.998 [2024-07-25 13:51:19.902832] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is 
bigger than 63 00:08:30.998 [2024-07-25 13:51:19.903068] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:30.998 passed 00:08:30.998 00:08:30.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.998 suites 1 1 n/a 0 0 00:08:30.998 tests 4 4 4 0 0 00:08:30.998 asserts 161 161 161 0 n/a 00:08:30.998 00:08:30.998 Elapsed time = 0.004 seconds 00:08:30.998 13:51:19 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:30.998 00:08:30.998 00:08:30.998 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.998 http://cunit.sourceforge.net/ 00:08:30.998 00:08:30.998 00:08:30.998 Suite: iscsi_target_node_suite 00:08:30.998 Test: add_lun_test_cases ...[2024-07-25 13:51:19.935313] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:30.998 [2024-07-25 13:51:19.935615] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:30.998 [2024-07-25 13:51:19.935705] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:30.998 [2024-07-25 13:51:19.935751] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:30.998 [2024-07-25 13:51:19.935782] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:30.998 passed 00:08:30.998 Test: allow_any_allowed ...passed 00:08:30.998 Test: allow_ipv6_allowed ...passed 00:08:30.998 Test: allow_ipv6_denied ...passed 00:08:30.998 Test: allow_ipv6_invalid ...passed 00:08:30.998 Test: allow_ipv4_allowed ...passed 00:08:30.998 Test: allow_ipv4_denied ...passed 00:08:30.998 Test: allow_ipv4_invalid ...passed 00:08:30.998 Test: node_access_allowed ...passed 00:08:30.998 Test: node_access_denied_by_empty_netmask ...passed 00:08:30.998 Test: node_access_multi_initiator_groups_cases ...passed 00:08:30.998 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:30.998 Test: chap_param_test_cases ...[2024-07-25 13:51:19.936140] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:30.998 [2024-07-25 13:51:19.936180] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:30.998 [2024-07-25 13:51:19.936232] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:30.998 [2024-07-25 13:51:19.936258] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:30.998 passed 00:08:30.998 00:08:30.998 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.998 suites 1 1 n/a 0 0 00:08:30.998 tests 13 13 13 0 0 00:08:30.998 asserts 50 50 50 0 n/a 00:08:30.998 00:08:30.999 Elapsed time = 0.001 seconds 00:08:30.999 [2024-07-25 13:51:19.936294] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:30.999 13:51:19 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:30.999 00:08:30.999 00:08:30.999 CUnit - A unit testing 
framework for C - Version 2.1-3 00:08:30.999 http://cunit.sourceforge.net/ 00:08:30.999 00:08:30.999 00:08:30.999 Suite: iscsi_suite 00:08:30.999 Test: op_login_check_target_test ...[2024-07-25 13:51:19.967047] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:08:30.999 passed 00:08:30.999 Test: op_login_session_normal_test ...[2024-07-25 13:51:19.967428] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:30.999 [2024-07-25 13:51:19.967491] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:30.999 [2024-07-25 13:51:19.967538] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:30.999 [2024-07-25 13:51:19.967611] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:30.999 [2024-07-25 13:51:19.967741] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:30.999 [2024-07-25 13:51:19.967853] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:30.999 passed 00:08:30.999 Test: maxburstlength_test ...[2024-07-25 13:51:19.967921] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:30.999 [2024-07-25 13:51:19.968212] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:30.999 passed 00:08:30.999 Test: underflow_for_read_transfer_test ...[2024-07-25 13:51:19.968289] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:30.999 passed 00:08:30.999 Test: underflow_for_zero_read_transfer_test ...passed 00:08:30.999 Test: underflow_for_request_sense_test ...passed 00:08:30.999 Test: underflow_for_check_condition_test ...passed 00:08:30.999 Test: add_transfer_task_test ...passed 00:08:30.999 Test: get_transfer_task_test ...passed 00:08:30.999 Test: del_transfer_task_test ...passed 00:08:30.999 Test: clear_all_transfer_tasks_test ...passed 00:08:30.999 Test: build_iovs_test ...passed 00:08:30.999 Test: build_iovs_with_md_test ...passed 00:08:30.999 Test: pdu_hdr_op_login_test ...[2024-07-25 13:51:19.969938] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:30.999 [2024-07-25 13:51:19.970074] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:30.999 [2024-07-25 13:51:19.970175] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:30.999 passed 00:08:30.999 Test: pdu_hdr_op_text_test ...[2024-07-25 13:51:19.970292] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:30.999 [2024-07-25 13:51:19.970394] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:30.999 [2024-07-25 13:51:19.970448] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:30.999 passed 00:08:30.999 Test: pdu_hdr_op_logout_test ...[2024-07-25 13:51:19.970543] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:08:30.999 passed 00:08:30.999 Test: pdu_hdr_op_scsi_test ...[2024-07-25 13:51:19.970711] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:30.999 [2024-07-25 13:51:19.970760] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:30.999 [2024-07-25 13:51:19.970822] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:30.999 [2024-07-25 13:51:19.970915] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:30.999 [2024-07-25 13:51:19.971006] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:30.999 [2024-07-25 13:51:19.971191] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:30.999 passed 00:08:30.999 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-25 13:51:19.971310] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:30.999 [2024-07-25 13:51:19.971413] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:30.999 passed 00:08:30.999 Test: pdu_hdr_op_nopout_test ...[2024-07-25 13:51:19.971631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:30.999 [2024-07-25 13:51:19.971739] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:30.999 [2024-07-25 13:51:19.971784] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:30.999 [2024-07-25 13:51:19.971822] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:30.999 passed 00:08:30.999 Test: pdu_hdr_op_data_test ...[2024-07-25 13:51:19.971873] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:30.999 [2024-07-25 13:51:19.971943] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:30.999 [2024-07-25 13:51:19.972004] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:30.999 [2024-07-25 13:51:19.972063] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:30.999 [2024-07-25 13:51:19.972132] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:30.999 
[2024-07-25 13:51:19.972209] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:30.999 passed 00:08:30.999 Test: empty_text_with_cbit_test ...[2024-07-25 13:51:19.972261] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:30.999 passed 00:08:30.999 Test: pdu_payload_read_test ...[2024-07-25 13:51:19.974430] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:30.999 passed 00:08:30.999 Test: data_out_pdu_sequence_test ...passed 00:08:30.999 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:30.999 00:08:30.999 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.999 suites 1 1 n/a 0 0 00:08:30.999 tests 24 24 24 0 0 00:08:30.999 asserts 150253 150253 150253 0 n/a 00:08:30.999 00:08:30.999 Elapsed time = 0.018 seconds 00:08:30.999 13:51:20 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:30.999 00:08:30.999 00:08:30.999 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.999 http://cunit.sourceforge.net/ 00:08:30.999 00:08:30.999 00:08:30.999 Suite: init_grp_suite 00:08:30.999 Test: create_initiator_group_success_case ...passed 00:08:30.999 Test: find_initiator_group_success_case ...passed 00:08:30.999 Test: register_initiator_group_twice_case ...passed 00:08:30.999 Test: add_initiator_name_success_case ...passed 00:08:30.999 Test: add_initiator_name_fail_case ...[2024-07-25 13:51:20.015494] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:30.999 passed 00:08:30.999 Test: delete_all_initiator_names_success_case ...passed 00:08:30.999 Test: add_netmask_success_case ...passed 00:08:30.999 Test: add_netmask_fail_case ...[2024-07-25 13:51:20.016137] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:30.999 passed 00:08:30.999 Test: delete_all_netmasks_success_case ...passed 00:08:30.999 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:30.999 Test: netmask_overwrite_all_to_any_case ...passed 00:08:30.999 Test: add_delete_initiator_names_case ...passed 00:08:30.999 Test: add_duplicated_initiator_names_case ...passed 00:08:30.999 Test: delete_nonexisting_initiator_names_case ...passed 00:08:30.999 Test: add_delete_netmasks_case ...passed 00:08:30.999 Test: add_duplicated_netmasks_case ...passed 00:08:30.999 Test: delete_nonexisting_netmasks_case ...passed 00:08:30.999 00:08:30.999 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.999 suites 1 1 n/a 0 0 00:08:30.999 tests 17 17 17 0 0 00:08:30.999 asserts 108 108 108 0 n/a 00:08:30.999 00:08:30.999 Elapsed time = 0.002 seconds 00:08:30.999 13:51:20 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:31.259 00:08:31.259 00:08:31.259 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.259 http://cunit.sourceforge.net/ 00:08:31.259 00:08:31.259 00:08:31.259 Suite: portal_grp_suite 00:08:31.259 Test: portal_create_ipv4_normal_case ...passed 00:08:31.259 Test: portal_create_ipv6_normal_case ...passed 00:08:31.259 Test: portal_create_ipv4_wildcard_case ...passed 00:08:31.259 Test: portal_create_ipv6_wildcard_case ...passed 00:08:31.259 Test: 
portal_create_twice_case ...[2024-07-25 13:51:20.048733] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:31.259 passed 00:08:31.259 Test: portal_grp_register_unregister_case ...passed 00:08:31.259 Test: portal_grp_register_twice_case ...passed 00:08:31.259 Test: portal_grp_add_delete_case ...passed 00:08:31.259 Test: portal_grp_add_delete_twice_case ...passed 00:08:31.259 00:08:31.259 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.259 suites 1 1 n/a 0 0 00:08:31.259 tests 9 9 9 0 0 00:08:31.259 asserts 44 44 44 0 n/a 00:08:31.259 00:08:31.259 Elapsed time = 0.004 seconds 00:08:31.259 00:08:31.259 real 0m0.219s 00:08:31.259 user 0m0.131s 00:08:31.259 sys 0m0.089s 00:08:31.259 13:51:20 unittest.unittest_iscsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.259 13:51:20 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 ************************************ 00:08:31.259 END TEST unittest_iscsi 00:08:31.259 ************************************ 00:08:31.259 13:51:20 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:08:31.259 13:51:20 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.259 13:51:20 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.259 13:51:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:31.259 ************************************ 00:08:31.259 START TEST unittest_json 00:08:31.259 ************************************ 00:08:31.259 13:51:20 unittest.unittest_json -- common/autotest_common.sh@1125 -- # unittest_json 00:08:31.259 13:51:20 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:31.259 00:08:31.259 00:08:31.259 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.259 http://cunit.sourceforge.net/ 00:08:31.259 00:08:31.259 00:08:31.259 Suite: json 00:08:31.259 Test: test_parse_literal ...passed 00:08:31.259 Test: test_parse_string_simple ...passed 00:08:31.259 Test: test_parse_string_control_chars ...passed 00:08:31.259 Test: test_parse_string_utf8 ...passed 00:08:31.259 Test: test_parse_string_escapes_twochar ...passed 00:08:31.259 Test: test_parse_string_escapes_unicode ...passed 00:08:31.259 Test: test_parse_number ...passed 00:08:31.259 Test: test_parse_array ...passed 00:08:31.259 Test: test_parse_object ...passed 00:08:31.259 Test: test_parse_nesting ...passed 00:08:31.259 Test: test_parse_comment ...passed 00:08:31.259 00:08:31.259 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.259 suites 1 1 n/a 0 0 00:08:31.259 tests 11 11 11 0 0 00:08:31.259 asserts 1516 1516 1516 0 n/a 00:08:31.259 00:08:31.259 Elapsed time = 0.002 seconds 00:08:31.259 13:51:20 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:31.259 00:08:31.259 00:08:31.259 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.259 http://cunit.sourceforge.net/ 00:08:31.259 00:08:31.259 00:08:31.259 Suite: json 00:08:31.259 Test: test_strequal ...passed 00:08:31.259 Test: test_num_to_uint16 ...passed 00:08:31.259 Test: test_num_to_int32 ...passed 00:08:31.259 Test: test_num_to_uint64 ...passed 00:08:31.259 Test: test_decode_object ...passed 00:08:31.259 Test: test_decode_array ...passed 00:08:31.259 Test: test_decode_bool ...passed 00:08:31.259 Test: test_decode_uint16 ...passed 00:08:31.259 
Test: test_decode_int32 ...passed 00:08:31.259 Test: test_decode_uint32 ...passed 00:08:31.259 Test: test_decode_uint64 ...passed 00:08:31.259 Test: test_decode_string ...passed 00:08:31.259 Test: test_decode_uuid ...passed 00:08:31.259 Test: test_find ...passed 00:08:31.259 Test: test_find_array ...passed 00:08:31.259 Test: test_iterating ...passed 00:08:31.259 Test: test_free_object ...passed 00:08:31.259 00:08:31.259 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.259 suites 1 1 n/a 0 0 00:08:31.259 tests 17 17 17 0 0 00:08:31.259 asserts 236 236 236 0 n/a 00:08:31.259 00:08:31.259 Elapsed time = 0.001 seconds 00:08:31.259 13:51:20 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:31.259 00:08:31.259 00:08:31.259 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.259 http://cunit.sourceforge.net/ 00:08:31.259 00:08:31.259 00:08:31.259 Suite: json 00:08:31.259 Test: test_write_literal ...passed 00:08:31.259 Test: test_write_string_simple ...passed 00:08:31.259 Test: test_write_string_escapes ...passed 00:08:31.259 Test: test_write_string_utf16le ...passed 00:08:31.259 Test: test_write_number_int32 ...passed 00:08:31.259 Test: test_write_number_uint32 ...passed 00:08:31.259 Test: test_write_number_uint128 ...passed 00:08:31.259 Test: test_write_string_number_uint128 ...passed 00:08:31.259 Test: test_write_number_int64 ...passed 00:08:31.259 Test: test_write_number_uint64 ...passed 00:08:31.259 Test: test_write_number_double ...passed 00:08:31.259 Test: test_write_uuid ...passed 00:08:31.259 Test: test_write_array ...passed 00:08:31.259 Test: test_write_object ...passed 00:08:31.259 Test: test_write_nesting ...passed 00:08:31.259 Test: test_write_val ...passed 00:08:31.259 00:08:31.259 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.259 suites 1 1 n/a 0 0 00:08:31.259 tests 16 16 16 0 0 00:08:31.259 asserts 918 918 918 0 n/a 00:08:31.259 00:08:31.259 Elapsed time = 0.005 seconds 00:08:31.259 13:51:20 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:31.260 00:08:31.260 00:08:31.260 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.260 http://cunit.sourceforge.net/ 00:08:31.260 00:08:31.260 00:08:31.260 Suite: jsonrpc 00:08:31.260 Test: test_parse_request ...passed 00:08:31.260 Test: test_parse_request_streaming ...passed 00:08:31.260 00:08:31.260 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.260 suites 1 1 n/a 0 0 00:08:31.260 tests 2 2 2 0 0 00:08:31.260 asserts 289 289 289 0 n/a 00:08:31.260 00:08:31.260 Elapsed time = 0.004 seconds 00:08:31.260 00:08:31.260 real 0m0.137s 00:08:31.260 user 0m0.091s 00:08:31.260 sys 0m0.047s 00:08:31.260 13:51:20 unittest.unittest_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.260 13:51:20 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:08:31.260 ************************************ 00:08:31.260 END TEST unittest_json 00:08:31.260 ************************************ 00:08:31.260 13:51:20 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:08:31.260 13:51:20 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.260 13:51:20 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.260 13:51:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:31.260 ************************************ 00:08:31.260 START TEST 
unittest_rpc 00:08:31.260 ************************************ 00:08:31.260 13:51:20 unittest.unittest_rpc -- common/autotest_common.sh@1125 -- # unittest_rpc 00:08:31.260 13:51:20 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:31.518 00:08:31.518 00:08:31.518 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.518 http://cunit.sourceforge.net/ 00:08:31.518 00:08:31.518 00:08:31.518 Suite: rpc 00:08:31.518 Test: test_jsonrpc_handler ...passed 00:08:31.518 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:31.518 Test: test_rpc_get_methods ...[2024-07-25 13:51:20.312268] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:31.518 passed 00:08:31.518 Test: test_rpc_spdk_get_version ...passed 00:08:31.518 Test: test_spdk_rpc_listen_close ...passed 00:08:31.518 Test: test_rpc_run_multiple_servers ...passed 00:08:31.518 00:08:31.518 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.518 suites 1 1 n/a 0 0 00:08:31.518 tests 6 6 6 0 0 00:08:31.518 asserts 23 23 23 0 n/a 00:08:31.518 00:08:31.518 Elapsed time = 0.001 seconds 00:08:31.518 00:08:31.518 real 0m0.032s 00:08:31.518 user 0m0.008s 00:08:31.518 sys 0m0.024s 00:08:31.518 13:51:20 unittest.unittest_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.518 13:51:20 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.518 ************************************ 00:08:31.518 END TEST unittest_rpc 00:08:31.518 ************************************ 00:08:31.518 13:51:20 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:31.518 13:51:20 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.518 13:51:20 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.518 13:51:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:31.518 ************************************ 00:08:31.518 START TEST unittest_notify 00:08:31.518 ************************************ 00:08:31.518 13:51:20 unittest.unittest_notify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:31.518 00:08:31.518 00:08:31.518 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.518 http://cunit.sourceforge.net/ 00:08:31.518 00:08:31.518 00:08:31.518 Suite: app_suite 00:08:31.518 Test: notify ...passed 00:08:31.518 00:08:31.518 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.518 suites 1 1 n/a 0 0 00:08:31.518 tests 1 1 1 0 0 00:08:31.518 asserts 13 13 13 0 n/a 00:08:31.518 00:08:31.518 Elapsed time = 0.000 seconds 00:08:31.518 00:08:31.518 real 0m0.030s 00:08:31.518 user 0m0.012s 00:08:31.518 sys 0m0.019s 00:08:31.518 13:51:20 unittest.unittest_notify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.518 13:51:20 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:08:31.518 ************************************ 00:08:31.518 END TEST unittest_notify 00:08:31.518 ************************************ 00:08:31.519 13:51:20 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:08:31.519 13:51:20 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.519 13:51:20 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.519 13:51:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:31.519 
************************************ 00:08:31.519 START TEST unittest_nvme 00:08:31.519 ************************************ 00:08:31.519 13:51:20 unittest.unittest_nvme -- common/autotest_common.sh@1125 -- # unittest_nvme 00:08:31.519 13:51:20 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:31.519 00:08:31.519 00:08:31.519 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.519 http://cunit.sourceforge.net/ 00:08:31.519 00:08:31.519 00:08:31.519 Suite: nvme 00:08:31.519 Test: test_opc_data_transfer ...passed 00:08:31.519 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:31.519 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:31.519 Test: test_trid_parse_and_compare ...[2024-07-25 13:51:20.464885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:31.519 [2024-07-25 13:51:20.465345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:31.519 [2024-07-25 13:51:20.465516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:31.519 [2024-07-25 13:51:20.465578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:31.519 [2024-07-25 13:51:20.465626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:08:31.519 [2024-07-25 13:51:20.465738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:31.519 passed 00:08:31.519 Test: test_trid_trtype_str ...passed 00:08:31.519 Test: test_trid_adrfam_str ...passed 00:08:31.519 Test: test_nvme_ctrlr_probe ...[2024-07-25 13:51:20.466141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:31.519 passed 00:08:31.519 Test: test_spdk_nvme_probe ...[2024-07-25 13:51:20.466292] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:31.519 [2024-07-25 13:51:20.466352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:31.519 [2024-07-25 13:51:20.466476] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:31.519 [2024-07-25 13:51:20.466539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:31.519 passed 00:08:31.519 Test: test_spdk_nvme_connect ...[2024-07-25 13:51:20.466691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:31.519 [2024-07-25 13:51:20.467187] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:31.519 passed 00:08:31.519 Test: test_nvme_ctrlr_probe_internal ...[2024-07-25 13:51:20.467417] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:31.519 passed 00:08:31.519 Test: test_nvme_init_controllers ...[2024-07-25 13:51:20.467489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:31.519 [2024-07-25 13:51:20.467607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 
708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:31.519 passed 00:08:31.519 Test: test_nvme_driver_init ...[2024-07-25 13:51:20.467742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:31.519 [2024-07-25 13:51:20.467800] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:31.777 [2024-07-25 13:51:20.576312] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:31.777 passed 00:08:31.777 Test: test_spdk_nvme_detach ...passed 00:08:31.777 Test: test_nvme_completion_poll_cb ...[2024-07-25 13:51:20.576558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:31.777 passed 00:08:31.777 Test: test_nvme_user_copy_cmd_complete ...passed 00:08:31.777 Test: test_nvme_allocate_request_null ...passed 00:08:31.777 Test: test_nvme_allocate_request ...passed 00:08:31.777 Test: test_nvme_free_request ...passed 00:08:31.777 Test: test_nvme_allocate_request_user_copy ...passed 00:08:31.777 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:31.777 Test: test_nvme_request_check_timeout ...passed 00:08:31.777 Test: test_nvme_wait_for_completion ...passed 00:08:31.777 Test: test_spdk_nvme_parse_func ...passed 00:08:31.777 Test: test_spdk_nvme_detach_async ...passed 00:08:31.777 Test: test_nvme_parse_addr ...[2024-07-25 13:51:20.577139] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1635:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:31.777 passed 00:08:31.777 00:08:31.777 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.777 suites 1 1 n/a 0 0 00:08:31.777 tests 25 25 25 0 0 00:08:31.777 asserts 326 326 326 0 n/a 00:08:31.777 00:08:31.777 Elapsed time = 0.006 seconds 00:08:31.777 13:51:20 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:31.777 00:08:31.777 00:08:31.777 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.777 http://cunit.sourceforge.net/ 00:08:31.777 00:08:31.777 00:08:31.777 Suite: nvme_ctrlr 00:08:31.777 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-25 13:51:20.614333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.777 passed 00:08:31.777 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-25 13:51:20.616232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.777 passed 00:08:31.777 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-25 13:51:20.617552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.777 passed 00:08:31.777 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-25 13:51:20.618842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.777 passed 00:08:31.777 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-25 13:51:20.620160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by 
NVMe spec, use min value 00:08:31.777 [2024-07-25 13:51:20.621378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 13:51:20.622643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 13:51:20.623844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:31.777 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-25 13:51:20.626268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.777 [2024-07-25 13:51:20.628583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 13:51:20.629922] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:31.777 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-25 13:51:20.632348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.777 [2024-07-25 13:51:20.633550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 13:51:20.635886] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:31.777 Test: test_nvme_ctrlr_init_delay ...[2024-07-25 13:51:20.638308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.778 passed 00:08:31.778 Test: test_alloc_io_qpair_rr_1 ...[2024-07-25 13:51:20.639592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.778 [2024-07-25 13:51:20.639801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:31.778 [2024-07-25 13:51:20.639962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:31.778 [2024-07-25 13:51:20.640042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:31.778 [2024-07-25 13:51:20.640099] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:31.778 passed 00:08:31.778 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:31.778 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:31.778 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-25 13:51:20.640237] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.778 passed 00:08:31.778 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-25 13:51:20.640439] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] 
admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.778 [2024-07-25 13:51:20.640594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:31.778 passed 00:08:31.778 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-25 13:51:20.640894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4997:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:31.778 [2024-07-25 13:51:20.641088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5034:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:31.778 [2024-07-25 13:51:20.641217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5074:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:08:31.778 [2024-07-25 13:51:20.641324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5034:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:31.778 passed 00:08:31.778 Test: test_nvme_ctrlr_fail ...[2024-07-25 13:51:20.641423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:31.778 passed 00:08:31.778 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:31.778 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:31.778 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-25 13:51:20.641588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:31.778 passed 00:08:31.778 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:31.778 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-25 13:51:20.642940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:32.036 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:32.036 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:32.036 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-25 13:51:20.905176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-25 13:51:20.912596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-25 13:51:20.913859] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 [2024-07-25 13:51:20.913954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3006:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:32.036 passed 00:08:32.036 Test: test_alloc_io_qpair_fail ...[2024-07-25 13:51:20.915210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_add_remove_process 
...passed 00:08:32.036 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-07-25 13:51:20.915372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_set_state ...passed 00:08:32.036 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-25 13:51:20.915493] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1550:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:08:32.036 [2024-07-25 13:51:20.915544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-25 13:51:20.933573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-25 13:51:20.971409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_reset ...[2024-07-25 13:51:20.972934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.036 Test: test_nvme_ctrlr_aer_callback ...[2024-07-25 13:51:20.973333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.036 passed 00:08:32.037 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-25 13:51:20.974763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.037 passed 00:08:32.037 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:32.037 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:32.037 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-25 13:51:20.976502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.037 passed 00:08:32.037 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:32.037 Test: test_nvme_ctrlr_ana_resize ...[2024-07-25 13:51:20.977880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.037 passed 00:08:32.037 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:32.037 Test: test_nvme_transport_ctrlr_ready ...[2024-07-25 13:51:20.979409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4156:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:32.037 [2024-07-25 13:51:20.979475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4208:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:08:32.037 passed 00:08:32.037 Test: test_nvme_ctrlr_disable ...[2024-07-25 13:51:20.979524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: 
*ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:32.037 passed 00:08:32.037 00:08:32.037 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.037 suites 1 1 n/a 0 0 00:08:32.037 tests 44 44 44 0 0 00:08:32.037 asserts 10434 10434 10434 0 n/a 00:08:32.037 00:08:32.037 Elapsed time = 0.324 seconds 00:08:32.037 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:32.037 00:08:32.037 00:08:32.037 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.037 http://cunit.sourceforge.net/ 00:08:32.037 00:08:32.037 00:08:32.037 Suite: nvme_ctrlr_cmd 00:08:32.037 Test: test_get_log_pages ...passed 00:08:32.037 Test: test_set_feature_cmd ...passed 00:08:32.037 Test: test_set_feature_ns_cmd ...passed 00:08:32.037 Test: test_get_feature_cmd ...passed 00:08:32.037 Test: test_get_feature_ns_cmd ...passed 00:08:32.037 Test: test_abort_cmd ...passed 00:08:32.037 Test: test_set_host_id_cmds ...[2024-07-25 13:51:21.027702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:32.037 passed 00:08:32.037 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:32.037 Test: test_io_raw_cmd ...passed 00:08:32.037 Test: test_io_raw_cmd_with_md ...passed 00:08:32.037 Test: test_namespace_attach ...passed 00:08:32.037 Test: test_namespace_detach ...passed 00:08:32.037 Test: test_namespace_create ...passed 00:08:32.037 Test: test_namespace_delete ...passed 00:08:32.037 Test: test_doorbell_buffer_config ...passed 00:08:32.037 Test: test_format_nvme ...passed 00:08:32.037 Test: test_fw_commit ...passed 00:08:32.037 Test: test_fw_image_download ...passed 00:08:32.037 Test: test_sanitize ...passed 00:08:32.037 Test: test_directive ...passed 00:08:32.037 Test: test_nvme_request_add_abort ...passed 00:08:32.037 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:32.037 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:32.037 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:32.037 00:08:32.037 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.037 suites 1 1 n/a 0 0 00:08:32.037 tests 24 24 24 0 0 00:08:32.037 asserts 198 198 198 0 n/a 00:08:32.037 00:08:32.037 Elapsed time = 0.001 seconds 00:08:32.037 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:32.037 00:08:32.037 00:08:32.037 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.037 http://cunit.sourceforge.net/ 00:08:32.037 00:08:32.037 00:08:32.037 Suite: nvme_ctrlr_cmd 00:08:32.037 Test: test_geometry_cmd ...passed 00:08:32.037 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:32.037 00:08:32.037 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.037 suites 1 1 n/a 0 0 00:08:32.037 tests 2 2 2 0 0 00:08:32.037 asserts 7 7 7 0 n/a 00:08:32.037 00:08:32.037 Elapsed time = 0.000 seconds 00:08:32.296 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:32.296 00:08:32.296 00:08:32.296 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.296 http://cunit.sourceforge.net/ 00:08:32.296 00:08:32.296 00:08:32.296 Suite: nvme 00:08:32.296 Test: test_nvme_ns_construct ...passed 00:08:32.296 Test: test_nvme_ns_uuid ...passed 00:08:32.296 Test: test_nvme_ns_csi ...passed 00:08:32.296 Test: 
test_nvme_ns_data ...passed 00:08:32.296 Test: test_nvme_ns_set_identify_data ...passed 00:08:32.296 Test: test_spdk_nvme_ns_get_values ...passed 00:08:32.296 Test: test_spdk_nvme_ns_is_active ...passed 00:08:32.296 Test: spdk_nvme_ns_supports ...passed 00:08:32.296 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:32.296 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:32.296 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:32.296 Test: test_nvme_ns_find_id_desc ...passed 00:08:32.296 00:08:32.296 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.296 suites 1 1 n/a 0 0 00:08:32.296 tests 12 12 12 0 0 00:08:32.296 asserts 95 95 95 0 n/a 00:08:32.296 00:08:32.296 Elapsed time = 0.001 seconds 00:08:32.296 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:32.296 00:08:32.296 00:08:32.296 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.296 http://cunit.sourceforge.net/ 00:08:32.296 00:08:32.296 00:08:32.296 Suite: nvme_ns_cmd 00:08:32.296 Test: split_test ...passed 00:08:32.296 Test: split_test2 ...passed 00:08:32.296 Test: split_test3 ...passed 00:08:32.296 Test: split_test4 ...passed 00:08:32.296 Test: test_nvme_ns_cmd_flush ...passed 00:08:32.296 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:32.296 Test: test_nvme_ns_cmd_copy ...passed 00:08:32.296 Test: test_io_flags ...[2024-07-25 13:51:21.120790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:32.296 passed 00:08:32.296 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:32.296 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:32.296 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:32.296 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:32.296 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:32.296 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:32.297 Test: test_cmd_child_request ...passed 00:08:32.297 Test: test_nvme_ns_cmd_readv ...passed 00:08:32.297 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:32.297 Test: test_nvme_ns_cmd_writev ...[2024-07-25 13:51:21.122032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:32.297 passed 00:08:32.297 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:32.297 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:32.297 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:32.297 Test: test_nvme_ns_cmd_comparev ...passed 00:08:32.297 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:32.297 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:32.297 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:32.297 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:32.297 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:32.297 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-25 13:51:21.123933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:32.297 passed 00:08:32.297 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:08:32.297 Test: test_nvme_ns_cmd_verify ...[2024-07-25 13:51:21.124049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:32.297 passed 00:08:32.297 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:32.297 Test: 
test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:32.297 00:08:32.297 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.297 suites 1 1 n/a 0 0 00:08:32.297 tests 32 32 32 0 0 00:08:32.297 asserts 550 550 550 0 n/a 00:08:32.297 00:08:32.297 Elapsed time = 0.005 seconds 00:08:32.297 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:32.297 00:08:32.297 00:08:32.297 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.297 http://cunit.sourceforge.net/ 00:08:32.297 00:08:32.297 00:08:32.297 Suite: nvme_ns_cmd 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:32.297 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:32.297 00:08:32.297 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.297 suites 1 1 n/a 0 0 00:08:32.297 tests 12 12 12 0 0 00:08:32.297 asserts 123 123 123 0 n/a 00:08:32.297 00:08:32.297 Elapsed time = 0.001 seconds 00:08:32.297 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:32.297 00:08:32.297 00:08:32.297 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.297 http://cunit.sourceforge.net/ 00:08:32.297 00:08:32.297 00:08:32.297 Suite: nvme_qpair 00:08:32.297 Test: test3 ...passed 00:08:32.297 Test: test_ctrlr_failed ...passed 00:08:32.297 Test: struct_packing ...passed 00:08:32.297 Test: test_nvme_qpair_process_completions ...[2024-07-25 13:51:21.186942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:32.297 [2024-07-25 13:51:21.187268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:32.297 [2024-07-25 13:51:21.187336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:32.297 passed 00:08:32.297 Test: test_nvme_completion_is_retry ...passed 00:08:32.297 Test: test_get_status_string ...passed 00:08:32.297 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-07-25 13:51:21.187419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:32.297 passed 00:08:32.297 Test: test_nvme_qpair_submit_request ...passed 00:08:32.297 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:32.297 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:32.297 Test: test_nvme_qpair_init_deinit ...[2024-07-25 13:51:21.187874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:32.297 passed 00:08:32.297 Test: test_nvme_get_sgl_print_info ...passed 00:08:32.297 00:08:32.297 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.297 suites 1 1 n/a 0 0 00:08:32.297 tests 12 12 12 0 0 00:08:32.297 asserts 154 154 154 0 n/a 00:08:32.297 00:08:32.297 Elapsed time = 0.001 seconds 00:08:32.297 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:32.297 00:08:32.297 00:08:32.297 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.297 http://cunit.sourceforge.net/ 00:08:32.297 00:08:32.297 00:08:32.297 Suite: nvme_pcie 00:08:32.297 Test: test_prp_list_append ...[2024-07-25 13:51:21.213226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:32.297 [2024-07-25 13:51:21.213546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:32.297 [2024-07-25 13:51:21.213617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1225:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:32.297 [2024-07-25 13:51:21.213908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:32.297 passed 00:08:32.297 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-25 13:51:21.214020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:32.297 passed 00:08:32.297 Test: test_shadow_doorbell_update ...passed 00:08:32.297 Test: test_build_contig_hw_sgl_request ...passed 00:08:32.297 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:32.297 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:32.297 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:32.297 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-25 13:51:21.214202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:32.297 passed 00:08:32.297 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:32.297 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:32.297 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:32.297 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-25 13:51:21.214298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:08:32.297 [2024-07-25 13:51:21.214385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:32.297 passed 00:08:32.297 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:08:32.297 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-25 13:51:21.214442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:32.297 [2024-07-25 13:51:21.214495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:32.297 passed 00:08:32.297 00:08:32.297 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.297 suites 1 1 n/a 0 0 00:08:32.297 tests 14 14 14 0 0 00:08:32.297 asserts 235 235 235 0 n/a 00:08:32.297 00:08:32.297 Elapsed time = 0.001 seconds 00:08:32.297 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:32.297 00:08:32.297 00:08:32.297 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.297 http://cunit.sourceforge.net/ 00:08:32.297 00:08:32.297 00:08:32.297 Suite: nvme_ns_cmd 00:08:32.297 Test: nvme_poll_group_create_test ...passed 00:08:32.297 Test: nvme_poll_group_add_remove_test ...passed 00:08:32.297 Test: nvme_poll_group_process_completions ...passed 00:08:32.297 Test: nvme_poll_group_destroy_test ...passed 00:08:32.297 Test: nvme_poll_group_get_free_stats ...passed 00:08:32.297 00:08:32.297 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.297 suites 1 1 n/a 0 0 00:08:32.297 tests 5 5 5 0 0 00:08:32.297 asserts 75 75 75 0 n/a 00:08:32.297 00:08:32.297 Elapsed time = 0.000 seconds 00:08:32.297 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:32.297 00:08:32.297 00:08:32.297 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.297 http://cunit.sourceforge.net/ 00:08:32.297 00:08:32.297 00:08:32.297 Suite: nvme_quirks 00:08:32.297 Test: test_nvme_quirks_striping ...passed 00:08:32.297 00:08:32.298 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.298 suites 1 1 n/a 0 0 00:08:32.298 tests 1 1 1 0 0 00:08:32.298 asserts 5 5 5 0 n/a 00:08:32.298 00:08:32.298 Elapsed time = 0.000 seconds 00:08:32.298 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:32.298 00:08:32.298 00:08:32.298 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.298 http://cunit.sourceforge.net/ 00:08:32.298 00:08:32.298 00:08:32.298 Suite: nvme_tcp 00:08:32.298 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:32.298 Test: test_nvme_tcp_build_iovs ...passed 00:08:32.298 Test: test_nvme_tcp_build_sgl_request ...[2024-07-25 13:51:21.296681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff9158eae0, and the iovcnt=16, remaining_size=28672 00:08:32.298 passed 00:08:32.298 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:32.298 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:32.298 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:32.298 Test: test_nvme_tcp_req_get ...passed 00:08:32.298 Test: test_nvme_tcp_req_init ...passed 00:08:32.298 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:32.298 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:32.298 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:08:32.298 Test: test_nvme_tcp_alloc_reqs ...[2024-07-25 13:51:21.297205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff91590820 is same with the state(6) to be set 00:08:32.298 passed 00:08:32.298 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-25 13:51:21.297488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158f9d0 is same with the state(5) to be set 00:08:32.298 passed 00:08:32.298 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-25 13:51:21.297552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff91590560 00:08:32.298 [2024-07-25 13:51:21.297603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:32.298 [2024-07-25 13:51:21.297680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.297731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:32.298 [2024-07-25 13:51:21.297823] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.297868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:32.298 [2024-07-25 13:51:21.297901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.297944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.297990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.298049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 passed 00:08:32.298 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-25 13:51:21.298085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.298142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158fe90 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.298312] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:32.298 [2024-07-25 13:51:21.298360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:32.298 [2024-07-25 13:51:21.298578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:08:32.298 passed 00:08:32.298 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:32.298 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:08:32.298 Test: test_nvme_tcp_icresp_handle ...[2024-07-25 13:51:21.298714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff915900a0): PDU Sequence Error 00:08:32.298 [2024-07-25 13:51:21.298777] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:32.298 [2024-07-25 13:51:21.298824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:32.298 [2024-07-25 13:51:21.298862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158f9e0 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.298901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:32.298 [2024-07-25 13:51:21.298938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158f9e0 is same with the state(5) to be set 00:08:32.298 passed 00:08:32.298 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:08:32.298 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-25 13:51:21.298992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158f9e0 is same with the state(0) to be set 00:08:32.298 [2024-07-25 13:51:21.299046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff91590560): PDU Sequence Error 00:08:32.298 [2024-07-25 13:51:21.299129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff9158eca0 00:08:32.298 passed 00:08:32.298 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:32.298 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-25 13:51:21.299274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff9158e320, errno=0, rc=0 00:08:32.298 [2024-07-25 13:51:21.299347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158e320 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.299408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff9158e320 is same with the state(5) to be set 00:08:32.298 [2024-07-25 13:51:21.299458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff9158e320 (0): Success 00:08:32.298 [2024-07-25 13:51:21.299499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff9158e320 (0): Success 00:08:32.298 passed 00:08:32.556 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-25 13:51:21.409952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:08:32.556 [2024-07-25 13:51:21.410054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:32.556 passed 00:08:32.556 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:32.556 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-25 13:51:21.410345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:32.556 [2024-07-25 13:51:21.410390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:32.556 passed 00:08:32.556 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-25 13:51:21.410579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:32.556 [2024-07-25 13:51:21.410630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:32.556 [2024-07-25 13:51:21.410735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:32.556 [2024-07-25 13:51:21.410790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:32.556 [2024-07-25 13:51:21.410888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:08:32.556 [2024-07-25 13:51:21.410953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:32.556 passed 00:08:32.557 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-25 13:51:21.411090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:08:32.557 passed 00:08:32.557 00:08:32.557 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.557 suites 1 1 n/a 0 0 00:08:32.557 tests 27 27 27 0 0 00:08:32.557 asserts 624 624 624 0 n/a 00:08:32.557 00:08:32.557 Elapsed time = 0.115 seconds 00:08:32.557 [2024-07-25 13:51:21.411132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:32.557 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:32.557 00:08:32.557 00:08:32.557 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.557 http://cunit.sourceforge.net/ 00:08:32.557 00:08:32.557 00:08:32.557 Suite: nvme_transport 00:08:32.557 Test: test_nvme_get_transport ...passed 00:08:32.557 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:32.557 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:32.557 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:32.557 Test: test_ctrlr_get_memory_domains ...passed 00:08:32.557 00:08:32.557 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.557 suites 1 1 n/a 0 0 00:08:32.557 tests 5 5 5 0 0 00:08:32.557 asserts 28 28 28 0 n/a 00:08:32.557 00:08:32.557 Elapsed time = 0.000 seconds 00:08:32.557 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:32.557 00:08:32.557 
00:08:32.557 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.557 http://cunit.sourceforge.net/ 00:08:32.557 00:08:32.557 00:08:32.557 Suite: nvme_io_msg 00:08:32.557 Test: test_nvme_io_msg_send ...passed 00:08:32.557 Test: test_nvme_io_msg_process ...passed 00:08:32.557 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:32.557 00:08:32.557 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.557 suites 1 1 n/a 0 0 00:08:32.557 tests 3 3 3 0 0 00:08:32.557 asserts 56 56 56 0 n/a 00:08:32.557 00:08:32.557 Elapsed time = 0.000 seconds 00:08:32.557 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:32.557 00:08:32.557 00:08:32.557 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.557 http://cunit.sourceforge.net/ 00:08:32.557 00:08:32.557 00:08:32.557 Suite: nvme_pcie_common 00:08:32.557 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-25 13:51:21.515416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:32.557 passed 00:08:32.557 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:32.557 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:32.557 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-25 13:51:21.516133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 505:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:32.557 [2024-07-25 13:51:21.516265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 458:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:32.557 passed 00:08:32.557 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-25 13:51:21.516316] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 552:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:32.557 passed 00:08:32.557 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-25 13:51:21.516716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:32.557 passed 00:08:32.557 00:08:32.557 [2024-07-25 13:51:21.516775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:32.557 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.557 suites 1 1 n/a 0 0 00:08:32.557 tests 6 6 6 0 0 00:08:32.557 asserts 148 148 148 0 n/a 00:08:32.557 00:08:32.557 Elapsed time = 0.001 seconds 00:08:32.557 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:32.557 00:08:32.557 00:08:32.557 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.557 http://cunit.sourceforge.net/ 00:08:32.557 00:08:32.557 00:08:32.557 Suite: nvme_fabric 00:08:32.557 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:32.557 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:32.557 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:32.557 Test: test_nvme_fabric_discover_probe ...passed 00:08:32.557 Test: test_nvme_fabric_qpair_connect ...[2024-07-25 13:51:21.550197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:32.557 passed 
00:08:32.557 00:08:32.557 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.557 suites 1 1 n/a 0 0 00:08:32.557 tests 5 5 5 0 0 00:08:32.557 asserts 60 60 60 0 n/a 00:08:32.557 00:08:32.557 Elapsed time = 0.001 seconds 00:08:32.557 13:51:21 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:32.557 00:08:32.557 00:08:32.557 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.557 http://cunit.sourceforge.net/ 00:08:32.557 00:08:32.557 00:08:32.557 Suite: nvme_opal 00:08:32.557 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:32.557 Test: test_opal_add_short_atom_header ...[2024-07-25 13:51:21.585924] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:32.557 passed 00:08:32.557 00:08:32.557 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.557 suites 1 1 n/a 0 0 00:08:32.557 tests 2 2 2 0 0 00:08:32.557 asserts 22 22 22 0 n/a 00:08:32.557 00:08:32.557 Elapsed time = 0.000 seconds 00:08:32.815 00:08:32.815 real 0m1.155s 00:08:32.815 user 0m0.574s 00:08:32.815 sys 0m0.438s 00:08:32.815 13:51:21 unittest.unittest_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.815 13:51:21 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.815 ************************************ 00:08:32.815 END TEST unittest_nvme 00:08:32.815 ************************************ 00:08:32.815 13:51:21 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:32.815 13:51:21 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.815 13:51:21 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.815 13:51:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:32.815 ************************************ 00:08:32.815 START TEST unittest_log 00:08:32.815 ************************************ 00:08:32.815 13:51:21 unittest.unittest_log -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:32.815 00:08:32.815 00:08:32.815 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.815 http://cunit.sourceforge.net/ 00:08:32.815 00:08:32.815 00:08:32.815 Suite: log 00:08:32.815 Test: log_test ...[2024-07-25 13:51:21.672024] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:08:32.815 [2024-07-25 13:51:21.672986] log_ut.c: 57:log_test: *DEBUG*: log test 00:08:32.815 passed 00:08:32.815 Test: deprecation ...log dump test: 00:08:32.815 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:32.815 spdk dump test: 00:08:32.815 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:32.815 spdk dump test: 00:08:32.815 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:32.815 00000010 65 20 63 68 61 72 73 e chars 00:08:33.749 passed 00:08:33.749 00:08:33.749 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.749 suites 1 1 n/a 0 0 00:08:33.749 tests 2 2 2 0 0 00:08:33.749 asserts 73 73 73 0 n/a 00:08:33.749 00:08:33.749 Elapsed time = 0.001 seconds 00:08:33.749 00:08:33.749 real 0m1.037s 00:08:33.749 user 0m0.019s 00:08:33.749 sys 0m0.017s 00:08:33.749 13:51:22 unittest.unittest_log -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.749 ************************************ 00:08:33.749 END TEST unittest_log 00:08:33.749 ************************************ 00:08:33.749 13:51:22 
unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:08:33.749 13:51:22 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:33.749 13:51:22 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.749 13:51:22 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.749 13:51:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:33.749 ************************************ 00:08:33.749 START TEST unittest_lvol 00:08:33.749 ************************************ 00:08:33.749 13:51:22 unittest.unittest_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:33.749 00:08:33.749 00:08:33.749 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.749 http://cunit.sourceforge.net/ 00:08:33.749 00:08:33.749 00:08:33.749 Suite: lvol 00:08:33.749 Test: lvs_init_unload_success ...[2024-07-25 13:51:22.763412] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:33.749 passed 00:08:33.749 Test: lvs_init_destroy_success ...[2024-07-25 13:51:22.763955] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:33.749 passed 00:08:33.749 Test: lvs_init_opts_success ...passed 00:08:33.749 Test: lvs_unload_lvs_is_null_fail ...[2024-07-25 13:51:22.764196] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:33.749 passed 00:08:33.749 Test: lvs_names ...[2024-07-25 13:51:22.764282] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:33.749 [2024-07-25 13:51:22.764368] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:08:33.749 [2024-07-25 13:51:22.764631] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:33.749 passed 00:08:33.749 Test: lvol_create_destroy_success ...passed 00:08:33.749 Test: lvol_create_fail ...[2024-07-25 13:51:22.765241] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:33.749 passed 00:08:33.749 Test: lvol_destroy_fail ...[2024-07-25 13:51:22.765376] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:33.749 [2024-07-25 13:51:22.765676] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:33.750 passed 00:08:33.750 Test: lvol_close ...[2024-07-25 13:51:22.765943] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:33.750 [2024-07-25 13:51:22.766015] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:33.750 passed 00:08:33.750 Test: lvol_resize ...passed 00:08:33.750 Test: lvol_set_read_only ...passed 00:08:33.750 Test: test_lvs_load ...[2024-07-25 13:51:22.766802] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:33.750 passed 00:08:33.750 Test: lvols_load ...[2024-07-25 13:51:22.766860] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:33.750 [2024-07-25 13:51:22.767099] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:33.750 passed 00:08:33.750 Test: lvol_open ...[2024-07-25 13:51:22.767211] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:33.750 passed 00:08:33.750 Test: lvol_snapshot ...passed 00:08:33.750 Test: lvol_snapshot_fail ...[2024-07-25 13:51:22.767927] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:33.750 passed 00:08:33.750 Test: lvol_clone ...passed 00:08:33.750 Test: lvol_clone_fail ...[2024-07-25 13:51:22.768523] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:33.750 passed 00:08:33.750 Test: lvol_iter_clones ...passed 00:08:33.750 Test: lvol_refcnt ...[2024-07-25 13:51:22.769027] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol ed436cab-c13d-4c39-8a93-6730e8bf3a8e because it is still open 00:08:33.750 passed 00:08:33.750 Test: lvol_names ...[2024-07-25 13:51:22.769265] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:33.750 [2024-07-25 13:51:22.769354] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:33.750 [2024-07-25 13:51:22.769563] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:33.750 passed 00:08:33.750 Test: lvol_create_thin_provisioned ...passed 00:08:33.750 Test: lvol_rename ...[2024-07-25 13:51:22.770053] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:33.750 [2024-07-25 13:51:22.770166] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:33.750 passed 00:08:33.750 Test: lvs_rename ...[2024-07-25 13:51:22.770408] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:33.750 passed 00:08:33.750 Test: lvol_inflate ...[2024-07-25 13:51:22.770600] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:33.750 passed 00:08:33.750 Test: lvol_decouple_parent ...[2024-07-25 13:51:22.770853] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:33.750 passed 00:08:33.750 Test: lvol_get_xattr ...passed 00:08:33.750 Test: lvol_esnap_reload ...passed 00:08:33.750 Test: lvol_esnap_create_bad_args ...[2024-07-25 13:51:22.771311] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:33.750 [2024-07-25 13:51:22.771354] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:33.750 [2024-07-25 13:51:22.771405] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:33.750 [2024-07-25 13:51:22.771564] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:33.750 [2024-07-25 13:51:22.771768] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:33.750 passed 00:08:33.750 Test: lvol_esnap_create_delete ...passed 00:08:33.750 Test: lvol_esnap_load_esnaps ...[2024-07-25 13:51:22.772266] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:33.750 passed 00:08:33.750 Test: lvol_esnap_missing ...[2024-07-25 13:51:22.772489] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:33.750 [2024-07-25 13:51:22.772593] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:33.750 passed 00:08:33.750 Test: lvol_esnap_hotplug ... 
00:08:33.750 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:33.750 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:33.750 [2024-07-25 13:51:22.773463] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 716c9b8a-36dd-4a06-94ed-8211406bbd47: failed to create esnap bs_dev: error -12 00:08:33.750 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:33.750 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:33.750 [2024-07-25 13:51:22.773815] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 5632f6ac-788f-4251-9eae-90a1c3a42625: failed to create esnap bs_dev: error -12 00:08:33.750 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:33.750 [2024-07-25 13:51:22.774028] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 5c7743a0-7579-4517-b5b3-6d078fc876a3: failed to create esnap bs_dev: error -12 00:08:33.750 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:33.750 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:33.750 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:33.750 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:33.750 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:33.750 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:33.750 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:33.750 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:33.750 passed 00:08:33.750 Test: lvol_get_by ...passed 00:08:33.750 Test: lvol_shallow_copy ...[2024-07-25 13:51:22.775217] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:08:33.750 [2024-07-25 13:51:22.775277] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol de34b3e5-21e8-468a-9df3-685b1e189690 shallow copy, ext_dev must not be NULL 00:08:33.750 passed 00:08:33.750 Test: lvol_set_parent ...[2024-07-25 13:51:22.775548] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:08:33.750 [2024-07-25 13:51:22.775600] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:08:33.750 passed 00:08:33.750 Test: lvol_set_external_parent ...[2024-07-25 13:51:22.775820] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:08:33.750 [2024-07-25 13:51:22.775882] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:08:33.750 [2024-07-25 13:51:22.775945] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:08:33.750 passed 00:08:33.750 00:08:33.750 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.750 suites 1 1 n/a 0 0 00:08:33.750 tests 37 37 37 0 0 00:08:33.750 asserts 1505 1505 1505 0 n/a 00:08:33.750 00:08:33.750 Elapsed time = 0.013 seconds 00:08:34.009 00:08:34.009 real 0m0.046s 00:08:34.009 user 0m0.017s 00:08:34.009 sys 0m0.029s 
00:08:34.009 13:51:22 unittest.unittest_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.009 13:51:22 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.009 ************************************ 00:08:34.009 END TEST unittest_lvol 00:08:34.009 ************************************ 00:08:34.009 13:51:22 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:34.009 13:51:22 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:34.009 13:51:22 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.009 13:51:22 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.009 13:51:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:34.009 ************************************ 00:08:34.009 START TEST unittest_nvme_rdma 00:08:34.009 ************************************ 00:08:34.009 13:51:22 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:34.009 00:08:34.009 00:08:34.009 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.009 http://cunit.sourceforge.net/ 00:08:34.009 00:08:34.009 00:08:34.009 Suite: nvme_rdma 00:08:34.009 Test: test_nvme_rdma_build_sgl_request ...[2024-07-25 13:51:22.861674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:34.009 [2024-07-25 13:51:22.862043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_build_sgl_inline_request ...passed[2024-07-25 13:51:22.862163] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:34.009 00:08:34.009 Test: test_nvme_rdma_build_contig_request ...passed 00:08:34.009 Test: test_nvme_rdma_build_contig_inline_request ...[2024-07-25 13:51:22.862230] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_create_reqs ...[2024-07-25 13:51:22.862351] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_create_rsps ...[2024-07-25 13:51:22.862696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-25 13:51:22.862882] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_poller_create ...[2024-07-25 13:51:22.862954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:34.009 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-25 13:51:22.863177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:34.009 Test: test_nvme_rdma_req_init ...passed 00:08:34.009 Test: test_nvme_rdma_validate_cm_event ...[2024-07-25 13:51:22.863492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_qpair_init ...passed 00:08:34.009 Test: test_nvme_rdma_qpair_submit_request ...[2024-07-25 13:51:22.863544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:34.009 passed 00:08:34.009 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:34.009 Test: test_rdma_get_memory_translation ...[2024-07-25 13:51:22.863678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:34.009 [2024-07-25 13:51:22.863728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:34.009 passed 00:08:34.009 Test: test_get_rdma_qpair_from_wc ...passed 00:08:34.009 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:34.009 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-25 13:51:22.863839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:34.009 [2024-07-25 13:51:22.863880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:34.009 passed 00:08:34.009 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-25 13:51:22.864051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:34.009 [2024-07-25 13:51:22.864098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:34.009 [2024-07-25 13:51:22.864143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff7bfcb3d0 on poll group 0x60c000000040 00:08:34.009 [2024-07-25 13:51:22.864186] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:34.009 [2024-07-25 13:51:22.864248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:34.009 [2024-07-25 13:51:22.864291] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff7bfcb3d0 on poll group 0x60c000000040 00:08:34.009 passed 00:08:34.009 00:08:34.009 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.009 suites 1 1 n/a 0 0 00:08:34.009 tests 21 21 21 0 0 00:08:34.009 asserts 397 397 397 0 n/a 00:08:34.009 00:08:34.009 Elapsed time = 0.003 seconds 00:08:34.009 [2024-07-25 13:51:22.864376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:34.009 00:08:34.009 real 0m0.036s 00:08:34.009 user 0m0.028s 00:08:34.009 sys 0m0.008s 00:08:34.009 13:51:22 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.009 13:51:22 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:34.009 ************************************ 00:08:34.009 END TEST unittest_nvme_rdma 00:08:34.009 ************************************ 00:08:34.009 13:51:22 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:34.009 13:51:22 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.009 13:51:22 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.009 13:51:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:34.009 ************************************ 00:08:34.009 START TEST unittest_nvmf_transport 00:08:34.009 ************************************ 00:08:34.009 13:51:22 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:34.009 00:08:34.009 00:08:34.009 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.009 http://cunit.sourceforge.net/ 00:08:34.009 00:08:34.009 00:08:34.009 Suite: nvmf 00:08:34.009 Test: test_spdk_nvmf_transport_create ...[2024-07-25 13:51:22.952699] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:34.009 [2024-07-25 13:51:22.953505] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:34.009 [2024-07-25 13:51:22.953741] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:34.009 [2024-07-25 13:51:22.954070] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:34.009 passed 00:08:34.009 Test: test_nvmf_transport_poll_group_create ...passed 00:08:34.009 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-25 13:51:22.954528] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:34.009 [2024-07-25 13:51:22.954757] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:34.010 [2024-07-25 13:51:22.954938] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:34.010 passed 00:08:34.010 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:08:34.010 00:08:34.010 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.010 suites 1 1 n/a 0 0 00:08:34.010 tests 4 4 4 0 0 00:08:34.010 asserts 49 49 49 0 n/a 00:08:34.010 00:08:34.010 Elapsed time = 0.002 seconds 00:08:34.010 00:08:34.010 real 0m0.042s 00:08:34.010 user 0m0.036s 00:08:34.010 sys 0m0.005s 00:08:34.010 13:51:22 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.010 13:51:22 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:08:34.010 ************************************ 00:08:34.010 END TEST unittest_nvmf_transport 00:08:34.010 ************************************ 00:08:34.010 13:51:23 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:34.010 13:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.010 13:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.010 13:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:34.010 ************************************ 00:08:34.010 START TEST unittest_rdma 00:08:34.010 ************************************ 00:08:34.010 13:51:23 unittest.unittest_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:34.010 00:08:34.010 00:08:34.010 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.010 http://cunit.sourceforge.net/ 00:08:34.010 00:08:34.010 00:08:34.010 Suite: rdma_common 00:08:34.010 Test: test_spdk_rdma_pd ...[2024-07-25 13:51:23.035233] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:34.010 [2024-07-25 13:51:23.035684] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:08:34.010 passed 00:08:34.010 00:08:34.010 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.010 suites 1 1 n/a 0 0 00:08:34.010 tests 1 1 1 0 0 00:08:34.010 asserts 31 31 31 0 n/a 00:08:34.010 00:08:34.010 Elapsed time = 0.001 seconds 00:08:34.268 00:08:34.268 real 0m0.033s 00:08:34.268 user 0m0.024s 00:08:34.268 sys 0m0.010s 00:08:34.268 13:51:23 unittest.unittest_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.268 13:51:23 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:34.268 ************************************ 00:08:34.268 END TEST unittest_rdma 00:08:34.268 ************************************ 00:08:34.268 13:51:23 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:34.268 13:51:23 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:34.268 13:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.268 13:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.268 13:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 
00:08:34.268 ************************************ 00:08:34.268 START TEST unittest_nvme_cuse 00:08:34.268 ************************************ 00:08:34.268 13:51:23 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:34.268 00:08:34.268 00:08:34.268 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.268 http://cunit.sourceforge.net/ 00:08:34.268 00:08:34.268 00:08:34.268 Suite: nvme_cuse 00:08:34.268 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:34.268 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:34.268 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:34.268 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:34.268 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:34.268 Test: test_cuse_nvme_submit_io ...[2024-07-25 13:51:23.127770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:34.268 passed 00:08:34.268 Test: test_cuse_nvme_reset ...[2024-07-25 13:51:23.128398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:34.268 passed 00:08:34.859 Test: test_nvme_cuse_stop ...passed 00:08:34.859 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:34.859 00:08:34.859 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.859 suites 1 1 n/a 0 0 00:08:34.859 tests 9 9 9 0 0 00:08:34.859 asserts 118 118 118 0 n/a 00:08:34.859 00:08:34.859 Elapsed time = 0.504 seconds 00:08:34.859 ************************************ 00:08:34.859 END TEST unittest_nvme_cuse 00:08:34.859 ************************************ 00:08:34.859 00:08:34.859 real 0m0.536s 00:08:34.859 user 0m0.279s 00:08:34.859 sys 0m0.256s 00:08:34.859 13:51:23 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.859 13:51:23 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:08:34.859 13:51:23 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:08:34.859 13:51:23 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.859 13:51:23 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.859 13:51:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:34.859 ************************************ 00:08:34.859 START TEST unittest_nvmf 00:08:34.859 ************************************ 00:08:34.859 13:51:23 unittest.unittest_nvmf -- common/autotest_common.sh@1125 -- # unittest_nvmf 00:08:34.859 13:51:23 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:34.859 00:08:34.859 00:08:34.859 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.859 http://cunit.sourceforge.net/ 00:08:34.859 00:08:34.859 00:08:34.859 Suite: nvmf 00:08:34.859 Test: test_get_log_page ...[2024-07-25 13:51:23.719055] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:34.859 passed 00:08:34.859 Test: test_process_fabrics_cmd ...[2024-07-25 13:51:23.719826] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:08:34.859 passed 00:08:34.859 Test: test_connect ...[2024-07-25 13:51:23.720889] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 
00:08:34.859 [2024-07-25 13:51:23.721206] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:34.859 [2024-07-25 13:51:23.721400] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:34.859 [2024-07-25 13:51:23.721584] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:08:34.859 [2024-07-25 13:51:23.721891] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:34.859 [2024-07-25 13:51:23.722108] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:34.859 [2024-07-25 13:51:23.722323] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:34.859 [2024-07-25 13:51:23.722518] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:34.859 [2024-07-25 13:51:23.722812] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:34.859 [2024-07-25 13:51:23.723065] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:34.859 [2024-07-25 13:51:23.723661] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:34.859 [2024-07-25 13:51:23.724038] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:34.859 [2024-07-25 13:51:23.724292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:34.859 [2024-07-25 13:51:23.724558] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:34.859 [2024-07-25 13:51:23.724841] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:08:34.859 [2024-07-25 13:51:23.725157] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:08:34.859 [2024-07-25 13:51:23.725378] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:34.859 passed 00:08:34.859 Test: test_get_ns_id_desc_list ...passed 00:08:34.859 Test: test_identify_ns ...[2024-07-25 13:51:23.726211] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:34.859 [2024-07-25 13:51:23.726582] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:34.859 [2024-07-25 13:51:23.726808] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:34.859 passed 00:08:34.859 Test: test_identify_ns_iocs_specific ...[2024-07-25 13:51:23.727189] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:34.859 
[2024-07-25 13:51:23.727523] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:34.859 passed 00:08:34.859 Test: test_reservation_write_exclusive ...passed 00:08:34.859 Test: test_reservation_exclusive_access ...passed 00:08:34.859 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:34.859 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:34.859 Test: test_reservation_notification_log_page ...passed 00:08:34.859 Test: test_get_dif_ctx ...passed 00:08:34.859 Test: test_set_get_features ...[2024-07-25 13:51:23.729896] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:34.859 [2024-07-25 13:51:23.730152] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:34.859 [2024-07-25 13:51:23.730385] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:34.859 [2024-07-25 13:51:23.730562] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:34.859 passed 00:08:34.859 Test: test_identify_ctrlr ...passed 00:08:34.859 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:34.859 Test: test_custom_admin_cmd ...passed 00:08:34.859 Test: test_fused_compare_and_write ...[2024-07-25 13:51:23.732180] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4249:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:34.859 [2024-07-25 13:51:23.732452] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:34.859 [2024-07-25 13:51:23.732689] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4256:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:34.859 passed 00:08:34.859 Test: test_multi_async_event_reqs ...passed 00:08:34.859 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:34.859 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:34.859 Test: test_multi_async_events ...passed 00:08:34.859 Test: test_rae ...passed 00:08:34.859 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:34.859 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:34.859 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-25 13:51:23.735100] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:08:34.859 [2024-07-25 13:51:23.735352] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:08:34.859 passed 00:08:34.859 Test: test_zcopy_read ...passed 00:08:34.859 Test: test_zcopy_write ...passed 00:08:34.859 Test: test_nvmf_property_set ...passed 00:08:34.859 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-25 13:51:23.736542] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:34.859 [2024-07-25 13:51:23.736735] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:34.860 passed 00:08:34.860 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-25 13:51:23.737178] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1970:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:34.860 [2024-07-25 13:51:23.737380] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1976:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:34.860 [2024-07-25 13:51:23.737601] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:34.860 [2024-07-25 13:51:23.737815] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:34.860 passed 00:08:34.860 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:08:34.860 Test: test_nvmf_check_qpair_active ...[2024-07-25 13:51:23.738497] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:08:34.860 [2024-07-25 13:51:23.738695] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4755:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:08:34.860 [2024-07-25 13:51:23.738885] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:08:34.860 [2024-07-25 13:51:23.739074] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:08:34.860 [2024-07-25 13:51:23.739277] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:08:34.860 passed 00:08:34.860 00:08:34.860 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.860 suites 1 1 n/a 0 0 00:08:34.860 tests 32 32 32 0 0 00:08:34.860 asserts 983 983 983 0 n/a 00:08:34.860 00:08:34.860 Elapsed time = 0.010 seconds 00:08:34.860 13:51:23 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:34.860 00:08:34.860 00:08:34.860 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.860 http://cunit.sourceforge.net/ 00:08:34.860 00:08:34.860 00:08:34.860 Suite: nvmf 00:08:34.860 Test: test_get_rw_params ...passed 00:08:34.860 Test: test_get_rw_ext_params ...passed 00:08:34.860 Test: test_lba_in_range ...passed 00:08:34.860 Test: test_get_dif_ctx ...passed 00:08:34.860 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:34.860 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-25 13:51:23.772924] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:34.860 [2024-07-25 13:51:23.773393] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:34.860 [2024-07-25 13:51:23.773632] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:34.860 passed 00:08:34.860 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-25 13:51:23.774008] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:34.860 [2024-07-25 13:51:23.774144] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:34.860 passed 
00:08:34.860 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-25 13:51:23.774574] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:34.860 [2024-07-25 13:51:23.774747] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:34.860 [2024-07-25 13:51:23.774952] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:34.860 [2024-07-25 13:51:23.775120] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:34.860 passed 00:08:34.860 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:34.860 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:34.860 00:08:34.860 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.860 suites 1 1 n/a 0 0 00:08:34.860 tests 10 10 10 0 0 00:08:34.860 asserts 159 159 159 0 n/a 00:08:34.860 00:08:34.860 Elapsed time = 0.002 seconds 00:08:34.860 13:51:23 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:34.860 00:08:34.860 00:08:34.860 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.860 http://cunit.sourceforge.net/ 00:08:34.860 00:08:34.860 00:08:34.860 Suite: nvmf 00:08:34.860 Test: test_discovery_log ...passed 00:08:34.860 Test: test_discovery_log_with_filters ...passed 00:08:34.860 00:08:34.860 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.860 suites 1 1 n/a 0 0 00:08:34.860 tests 2 2 2 0 0 00:08:34.860 asserts 238 238 238 0 n/a 00:08:34.860 00:08:34.860 Elapsed time = 0.003 seconds 00:08:34.860 13:51:23 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:34.860 00:08:34.860 00:08:34.860 CUnit - A unit testing framework for C - Version 2.1-3 00:08:34.860 http://cunit.sourceforge.net/ 00:08:34.860 00:08:34.860 00:08:34.860 Suite: nvmf 00:08:34.860 Test: nvmf_test_create_subsystem ...[2024-07-25 13:51:23.844179] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:34.860 [2024-07-25 13:51:23.844549] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:34.860 [2024-07-25 13:51:23.844802] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:34.860 [2024-07-25 13:51:23.844997] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:34.860 [2024-07-25 13:51:23.845131] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
00:08:34.860 [2024-07-25 13:51:23.845299] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:34.860 [2024-07-25 13:51:23.845529] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:34.860 [2024-07-25 13:51:23.845691] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:34.860 [2024-07-25 13:51:23.845932] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:34.860 [2024-07-25 13:51:23.846066] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:34.860 [2024-07-25 13:51:23.846141] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:34.860 [2024-07-25 13:51:23.846345] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:34.860 [2024-07-25 13:51:23.846490] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:34.860 [2024-07-25 13:51:23.846687] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:34.860 [2024-07-25 13:51:23.846894] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:34.860 [2024-07-25 13:51:23.847026] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:34.860 [2024-07-25 13:51:23.847158] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:34.860 [2024-07-25 13:51:23.847287] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:34.860 [2024-07-25 13:51:23.847427] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:34.860 [2024-07-25 13:51:23.847604] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:34.860 [2024-07-25 13:51:23.847751] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:34.860 [2024-07-25 13:51:23.847891] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:34.860 passed 00:08:34.861 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-25 13:51:23.848174] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:34.861 [2024-07-25 13:51:23.848319] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:34.861 passed 00:08:34.861 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-25 13:51:23.848832] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2161:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:08:34.861 passed 00:08:34.861 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:34.861 Test: test_spdk_nvmf_ns_visible ...[2024-07-25 13:51:23.849348] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:08:34.861 passed 00:08:34.861 Test: test_reservation_register ...[2024-07-25 13:51:23.850019] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 [2024-07-25 13:51:23.850248] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:34.861 passed 00:08:34.861 Test: test_reservation_register_with_ptpl ...passed 00:08:34.861 Test: test_reservation_acquire_preempt_1 ...[2024-07-25 13:51:23.851651] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 passed 00:08:34.861 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:34.861 Test: test_reservation_release ...[2024-07-25 13:51:23.853550] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 passed 00:08:34.861 Test: test_reservation_unregister_notification ...[2024-07-25 13:51:23.854051] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 passed 00:08:34.861 Test: test_reservation_release_notification ...[2024-07-25 13:51:23.854524] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 passed 00:08:34.861 Test: test_reservation_release_notification_write_exclusive ...[2024-07-25 13:51:23.855026] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 passed 00:08:34.861 Test: test_reservation_clear_notification ...[2024-07-25 13:51:23.855500] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 passed 00:08:34.861 Test: test_reservation_preempt_notification ...[2024-07-25 13:51:23.856015] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:34.861 passed 00:08:34.861 Test: test_spdk_nvmf_ns_event ...passed 00:08:34.861 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:34.861 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:34.861 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-25 13:51:23.857353] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:34.861 [2024-07-25 13:51:23.857539] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:08:34.861 passed 00:08:34.861 Test: test_nvmf_ns_reservation_report ...[2024-07-25 13:51:23.857918] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3469:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:34.861 passed 00:08:34.861 Test: test_nvmf_nqn_is_valid ...[2024-07-25 
13:51:23.858275] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:34.861 [2024-07-25 13:51:23.858471] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:a1c03c53-4e1d-449a-9b49-34a8ab9f969": uuid is not the correct length 00:08:34.861 [2024-07-25 13:51:23.858603] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:34.861 passed 00:08:34.861 Test: test_nvmf_ns_reservation_restore ...[2024-07-25 13:51:23.858817] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:34.861 passed 00:08:34.861 Test: test_nvmf_subsystem_state_change ...passed 00:08:34.861 Test: test_nvmf_reservation_custom_ops ...passed 00:08:34.861 00:08:34.861 Run Summary: Type Total Ran Passed Failed Inactive 00:08:34.861 suites 1 1 n/a 0 0 00:08:34.861 tests 24 24 24 0 0 00:08:34.861 asserts 499 499 499 0 n/a 00:08:34.861 00:08:34.861 Elapsed time = 0.009 seconds 00:08:34.861 13:51:23 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:35.119 00:08:35.119 00:08:35.119 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.119 http://cunit.sourceforge.net/ 00:08:35.119 00:08:35.119 00:08:35.119 Suite: nvmf 00:08:35.119 Test: test_nvmf_tcp_create ...[2024-07-25 13:51:23.919936] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 750:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:35.119 passed 00:08:35.119 Test: test_nvmf_tcp_destroy ...passed 00:08:35.119 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:35.119 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:35.119 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:35.119 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:35.119 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:35.119 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-25 13:51:24.007433] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.007622] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.007812] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.007963] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.008088] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 passed 00:08:35.119 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:35.119 Test: test_nvmf_tcp_icreq_handle ...[2024-07-25 13:51:24.008544] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:35.119 [2024-07-25 13:51:24.008769] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:08:35.119 [2024-07-25 13:51:24.008958] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.009099] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:35.119 [2024-07-25 13:51:24.009241] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.009386] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.009526] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.009682] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.009865] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 passed 00:08:35.119 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:35.119 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-25 13:51:24.010348] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2563:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:35.119 [2024-07-25 13:51:24.010502] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.010628] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311550 is same with the state(5) to be set 00:08:35.119 passed 00:08:35.119 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-25 13:51:24.010947] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2295:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fff063122b0 00:08:35.119 [2024-07-25 13:51:24.011142] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.011301] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.011438] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2352:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fff06311a10 00:08:35.119 [2024-07-25 13:51:24.011582] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.011730] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.011861] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2305:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:35.119 [2024-07-25 13:51:24.012022] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.119 [2024-07-25 13:51:24.012184] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.119 [2024-07-25 13:51:24.012330] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2344:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:35.119 [2024-07-25 13:51:24.012466] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.120 [2024-07-25 13:51:24.012613] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.120 [2024-07-25 13:51:24.012757] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.120 [2024-07-25 13:51:24.012907] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.120 [2024-07-25 13:51:24.013075] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.120 [2024-07-25 13:51:24.013205] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.120 [2024-07-25 13:51:24.013351] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.120 [2024-07-25 13:51:24.013482] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.120 [2024-07-25 13:51:24.013626] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.120 [2024-07-25 13:51:24.013758] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.120 [2024-07-25 13:51:24.013933] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.120 [2024-07-25 13:51:24.014066] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.120 [2024-07-25 13:51:24.014227] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:35.120 [2024-07-25 13:51:24.014354] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff06311a10 is same with the state(5) to be set 00:08:35.120 passed 00:08:35.120 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:08:35.120 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-25 13:51:24.034114] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:35.120 [2024-07-25 13:51:24.034296] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
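For reference: the two nvme_tcp.h errors directly above are deliberately triggered failure paths in the PSK identity helper, one for an undersized output buffer and one for an unsupported cipher suite, and the test still reports passed because those rejections are exactly what it asserts. A minimal standalone sketch of checking such negative paths is shown below; generate_identity() is a hypothetical stand-in with an invented signature and invented error codes, not the SPDK routine named in the log.

    #include <assert.h>
    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical stand-in: rejects unknown cipher suites and output buffers
     * that cannot hold the identity string, mirroring the two error messages
     * logged above ("Unknown cipher suite requested!", "Out buffer too small!"). */
    static int
    generate_identity(char *out, size_t out_len, const char *hostnqn,
                      const char *subnqn, int cipher_suite)
    {
        if (cipher_suite != 1 && cipher_suite != 2) {
            return -ENOTSUP;
        }
        int n = snprintf(out, out_len, "%s %s %d", hostnqn, subnqn, cipher_suite);
        if (n < 0 || (size_t)n >= out_len) {
            return -ENOBUFS;
        }
        return 0;
    }

    int
    main(void)
    {
        char small[4];
        char big[256];
        const char *host = "nqn.2016-06.io.spdk:host1";
        const char *subsys = "nqn.2016-06.io.spdk:cnode1";

        /* Undersized buffer and unknown suite must be rejected; valid input succeeds. */
        assert(generate_identity(small, sizeof(small), host, subsys, 1) == -ENOBUFS);
        assert(generate_identity(big, sizeof(big), host, subsys, 99) == -ENOTSUP);
        assert(generate_identity(big, sizeof(big), host, subsys, 1) == 0);
        puts("negative-path checks behave as expected");
        return 0;
    }
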
00:08:35.120 passed 00:08:35.120 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-25 13:51:24.034909] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:35.120 [2024-07-25 13:51:24.035083] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:35.120 passed 00:08:35.120 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-25 13:51:24.035582] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:35.120 [2024-07-25 13:51:24.035737] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:35.120 passed 00:08:35.120 00:08:35.120 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.120 suites 1 1 n/a 0 0 00:08:35.120 tests 17 17 17 0 0 00:08:35.120 asserts 222 222 222 0 n/a 00:08:35.120 00:08:35.120 Elapsed time = 0.129 seconds 00:08:35.120 13:51:24 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:35.120 00:08:35.120 00:08:35.120 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.120 http://cunit.sourceforge.net/ 00:08:35.120 00:08:35.120 00:08:35.120 Suite: nvmf 00:08:35.120 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:35.120 00:08:35.120 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.120 suites 1 1 n/a 0 0 00:08:35.120 tests 1 1 1 0 0 00:08:35.120 asserts 17 17 17 0 n/a 00:08:35.120 00:08:35.120 Elapsed time = 0.020 seconds 00:08:35.379 00:08:35.379 real 0m0.485s 00:08:35.379 user 0m0.202s 00:08:35.379 sys 0m0.255s 00:08:35.379 13:51:24 unittest.unittest_nvmf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.379 13:51:24 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:08:35.379 ************************************ 00:08:35.379 END TEST unittest_nvmf 00:08:35.379 ************************************ 00:08:35.379 13:51:24 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:35.379 13:51:24 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:35.379 13:51:24 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:35.379 13:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.379 13:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.379 13:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:35.379 ************************************ 00:08:35.379 START TEST unittest_nvmf_rdma 00:08:35.379 ************************************ 00:08:35.379 13:51:24 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:35.379 00:08:35.379 00:08:35.379 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.379 http://cunit.sourceforge.net/ 00:08:35.379 00:08:35.379 00:08:35.379 Suite: nvmf 00:08:35.379 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-25 13:51:24.257642] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:35.379 
[2024-07-25 13:51:24.257992] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:35.379 [2024-07-25 13:51:24.258057] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:35.379 passed 00:08:35.379 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:35.379 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:35.379 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:35.379 Test: test_nvmf_rdma_opts_init ...passed 00:08:35.379 Test: test_nvmf_rdma_request_free_data ...passed 00:08:35.379 Test: test_nvmf_rdma_resources_create ...passed 00:08:35.379 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:35.379 Test: test_nvmf_rdma_resize_cq ...[2024-07-25 13:51:24.260523] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:35.379 Using CQ of insufficient size may lead to CQ overrun 00:08:35.379 passed 00:08:35.379 00:08:35.379 [2024-07-25 13:51:24.260641] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:35.379 [2024-07-25 13:51:24.260714] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:35.379 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.379 suites 1 1 n/a 0 0 00:08:35.379 tests 9 9 9 0 0 00:08:35.379 asserts 579 579 579 0 n/a 00:08:35.379 00:08:35.379 Elapsed time = 0.003 seconds 00:08:35.379 00:08:35.379 real 0m0.044s 00:08:35.379 user 0m0.030s 00:08:35.379 sys 0m0.014s 00:08:35.379 13:51:24 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.379 13:51:24 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:35.379 ************************************ 00:08:35.379 END TEST unittest_nvmf_rdma 00:08:35.379 ************************************ 00:08:35.379 13:51:24 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:35.379 13:51:24 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:08:35.379 13:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.379 13:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.379 13:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:35.379 ************************************ 00:08:35.379 START TEST unittest_scsi 00:08:35.379 ************************************ 00:08:35.379 13:51:24 unittest.unittest_scsi -- common/autotest_common.sh@1125 -- # unittest_scsi 00:08:35.379 13:51:24 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:35.379 00:08:35.379 00:08:35.379 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.379 http://cunit.sourceforge.net/ 00:08:35.379 00:08:35.379 00:08:35.379 Suite: dev_suite 00:08:35.379 Test: dev_destruct_null_dev ...passed 00:08:35.379 Test: dev_destruct_zero_luns ...passed 00:08:35.379 Test: dev_destruct_null_lun ...passed 00:08:35.379 Test: dev_destruct_success ...passed 00:08:35.379 Test: dev_construct_num_luns_zero ...[2024-07-25 13:51:24.356063] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:35.379 passed 00:08:35.379 Test: dev_construct_no_lun_zero ...[2024-07-25 13:51:24.356544] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:35.379 passed 00:08:35.379 Test: dev_construct_null_lun ...passed 00:08:35.379 Test: dev_construct_name_too_long ...[2024-07-25 13:51:24.356625] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:35.379 passed 00:08:35.379 Test: dev_construct_success ...passed 00:08:35.379 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:35.379 Test: dev_queue_mgmt_task_success ...[2024-07-25 13:51:24.356686] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:35.379 passed 00:08:35.379 Test: dev_queue_task_success ...passed 00:08:35.379 Test: dev_stop_success ...passed 00:08:35.379 Test: dev_add_port_max_ports ...[2024-07-25 13:51:24.356992] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:35.379 passed 00:08:35.379 Test: dev_add_port_construct_failure1 ...passed 00:08:35.379 Test: dev_add_port_construct_failure2 ...[2024-07-25 13:51:24.357111] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:35.379 [2024-07-25 13:51:24.357202] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:35.379 passed 00:08:35.379 Test: dev_add_port_success1 ...passed 00:08:35.379 Test: dev_add_port_success2 ...passed 00:08:35.379 Test: dev_add_port_success3 ...passed 00:08:35.379 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:35.380 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:35.380 Test: dev_find_port_by_id_success ...passed 00:08:35.380 Test: dev_add_lun_bdev_not_found ...passed 00:08:35.380 Test: dev_add_lun_no_free_lun_id ...[2024-07-25 13:51:24.357614] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:35.380 passed 00:08:35.380 Test: dev_add_lun_success1 ...passed 00:08:35.380 Test: dev_add_lun_success2 ...passed 00:08:35.380 Test: dev_check_pending_tasks ...passed 00:08:35.380 Test: dev_iterate_luns ...passed 00:08:35.380 Test: dev_find_free_lun ...passed 00:08:35.380 00:08:35.380 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.380 suites 1 1 n/a 0 0 00:08:35.380 tests 29 29 29 0 0 00:08:35.380 asserts 97 97 97 0 n/a 00:08:35.380 00:08:35.380 Elapsed time = 0.002 seconds 00:08:35.380 13:51:24 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:35.380 00:08:35.380 00:08:35.380 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.380 http://cunit.sourceforge.net/ 00:08:35.380 00:08:35.380 00:08:35.380 Suite: lun_suite 00:08:35.380 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:08:35.380 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-25 13:51:24.395659] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 
169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:35.380 [2024-07-25 13:51:24.395981] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:35.380 passed 00:08:35.380 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:35.380 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:35.380 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-25 13:51:24.396134] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:35.380 passed 00:08:35.380 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:35.380 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:35.380 Test: lun_append_task_null_lun_not_supported ...passed 00:08:35.380 Test: lun_execute_scsi_task_pending ...passed 00:08:35.380 Test: lun_execute_scsi_task_complete ...passed 00:08:35.380 Test: lun_execute_scsi_task_resize ...passed 00:08:35.380 Test: lun_destruct_success ...passed 00:08:35.380 Test: lun_construct_null_ctx ...passed 00:08:35.380 Test: lun_construct_success ...passed 00:08:35.380 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-25 13:51:24.396337] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:35.380 passed 00:08:35.380 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:35.380 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:35.380 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:35.380 00:08:35.380 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.380 suites 1 1 n/a 0 0 00:08:35.380 tests 18 18 18 0 0 00:08:35.380 asserts 153 153 153 0 n/a 00:08:35.380 00:08:35.380 Elapsed time = 0.001 seconds 00:08:35.380 13:51:24 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:35.639 00:08:35.639 00:08:35.639 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.639 http://cunit.sourceforge.net/ 00:08:35.639 00:08:35.639 00:08:35.639 Suite: scsi_suite 00:08:35.639 Test: scsi_init ...passed 00:08:35.639 00:08:35.639 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.639 suites 1 1 n/a 0 0 00:08:35.639 tests 1 1 1 0 0 00:08:35.639 asserts 1 1 1 0 n/a 00:08:35.639 00:08:35.639 Elapsed time = 0.000 seconds 00:08:35.639 13:51:24 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:35.639 00:08:35.639 00:08:35.639 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.639 http://cunit.sourceforge.net/ 00:08:35.639 00:08:35.639 00:08:35.639 Suite: translation_suite 00:08:35.639 Test: mode_select_6_test ...passed 00:08:35.639 Test: mode_select_6_test2 ...passed 00:08:35.639 Test: mode_sense_6_test ...passed 00:08:35.639 Test: mode_sense_10_test ...passed 00:08:35.639 Test: inquiry_evpd_test ...passed 00:08:35.639 Test: inquiry_standard_test ...passed 00:08:35.639 Test: inquiry_overflow_test ...passed 00:08:35.639 Test: task_complete_test ...passed 00:08:35.639 Test: lba_range_test ...passed 00:08:35.639 Test: xfer_len_test ...[2024-07-25 13:51:24.458578] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:35.639 passed 00:08:35.639 Test: xfer_test ...passed 00:08:35.639 Test: scsi_name_padding_test ...passed 00:08:35.639 Test: get_dif_ctx_test ...passed 00:08:35.639 Test: 
unmap_split_test ...passed 00:08:35.639 00:08:35.639 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.639 suites 1 1 n/a 0 0 00:08:35.639 tests 14 14 14 0 0 00:08:35.639 asserts 1205 1205 1205 0 n/a 00:08:35.639 00:08:35.639 Elapsed time = 0.004 seconds 00:08:35.639 13:51:24 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:35.639 00:08:35.639 00:08:35.639 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.639 http://cunit.sourceforge.net/ 00:08:35.639 00:08:35.639 00:08:35.639 Suite: reservation_suite 00:08:35.639 Test: test_reservation_register ...[2024-07-25 13:51:24.484527] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 passed 00:08:35.639 Test: test_reservation_reserve ...[2024-07-25 13:51:24.484839] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 [2024-07-25 13:51:24.484990] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:35.639 [2024-07-25 13:51:24.485098] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:35.639 passed 00:08:35.639 Test: test_all_registrant_reservation_reserve ...passed 00:08:35.639 Test: test_all_registrant_reservation_access ...[2024-07-25 13:51:24.485165] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 [2024-07-25 13:51:24.485272] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 passed 00:08:35.639 Test: test_reservation_preempt_non_all_regs ...[2024-07-25 13:51:24.485333] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:08:35.639 [2024-07-25 13:51:24.485387] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:08:35.639 [2024-07-25 13:51:24.485448] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 [2024-07-25 13:51:24.485516] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:35.639 passed 00:08:35.639 Test: test_reservation_preempt_all_regs ...[2024-07-25 13:51:24.485637] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 passed 00:08:35.639 Test: test_reservation_cmds_conflict ...[2024-07-25 13:51:24.485738] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 [2024-07-25 13:51:24.485807] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:35.639 [2024-07-25 13:51:24.485872] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:35.639 passed 00:08:35.639 Test: test_scsi2_reserve_release ...passed 00:08:35.639 Test: 
test_pr_with_scsi2_reserve_release ...[2024-07-25 13:51:24.485909] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:35.639 [2024-07-25 13:51:24.485947] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:35.639 [2024-07-25 13:51:24.485977] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:35.639 [2024-07-25 13:51:24.486048] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:35.639 passed 00:08:35.639 00:08:35.639 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.639 suites 1 1 n/a 0 0 00:08:35.639 tests 9 9 9 0 0 00:08:35.639 asserts 344 344 344 0 n/a 00:08:35.639 00:08:35.639 Elapsed time = 0.002 seconds 00:08:35.639 00:08:35.639 real 0m0.164s 00:08:35.639 user 0m0.069s 00:08:35.639 sys 0m0.093s 00:08:35.639 13:51:24 unittest.unittest_scsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.639 13:51:24 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:08:35.639 ************************************ 00:08:35.639 END TEST unittest_scsi 00:08:35.639 ************************************ 00:08:35.639 13:51:24 unittest -- unit/unittest.sh@278 -- # uname -s 00:08:35.639 13:51:24 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:08:35.639 13:51:24 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:08:35.639 13:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.639 13:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.639 13:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:35.639 ************************************ 00:08:35.639 START TEST unittest_sock 00:08:35.639 ************************************ 00:08:35.640 13:51:24 unittest.unittest_sock -- common/autotest_common.sh@1125 -- # unittest_sock 00:08:35.640 13:51:24 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:35.640 00:08:35.640 00:08:35.640 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.640 http://cunit.sourceforge.net/ 00:08:35.640 00:08:35.640 00:08:35.640 Suite: sock 00:08:35.640 Test: posix_sock ...passed 00:08:35.640 Test: ut_sock ...passed 00:08:35.640 Test: posix_sock_group ...passed 00:08:35.640 Test: ut_sock_group ...passed 00:08:35.640 Test: posix_sock_group_fairness ...passed 00:08:35.640 Test: _posix_sock_close ...passed 00:08:35.640 Test: sock_get_default_opts ...passed 00:08:35.640 Test: ut_sock_impl_get_set_opts ...passed 00:08:35.640 Test: posix_sock_impl_get_set_opts ...passed 00:08:35.640 Test: ut_sock_map ...passed 00:08:35.640 Test: override_impl_opts ...passed 00:08:35.640 Test: ut_sock_group_get_ctx ...passed 00:08:35.640 Test: posix_get_interface_name ...passed 00:08:35.640 00:08:35.640 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.640 suites 1 1 n/a 0 0 00:08:35.640 tests 13 13 13 0 0 00:08:35.640 asserts 360 360 360 0 n/a 00:08:35.640 00:08:35.640 Elapsed time = 0.010 seconds 00:08:35.640 13:51:24 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:35.640 00:08:35.640 00:08:35.640 CUnit - A unit testing framework for C - 
Version 2.1-3 00:08:35.640 http://cunit.sourceforge.net/ 00:08:35.640 00:08:35.640 00:08:35.640 Suite: posix 00:08:35.640 Test: flush ...passed 00:08:35.640 00:08:35.640 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.640 suites 1 1 n/a 0 0 00:08:35.640 tests 1 1 1 0 0 00:08:35.640 asserts 28 28 28 0 n/a 00:08:35.640 00:08:35.640 Elapsed time = 0.000 seconds 00:08:35.640 13:51:24 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:35.640 00:08:35.640 real 0m0.116s 00:08:35.640 user 0m0.049s 00:08:35.640 sys 0m0.044s 00:08:35.640 13:51:24 unittest.unittest_sock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.640 13:51:24 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:08:35.640 ************************************ 00:08:35.640 END TEST unittest_sock 00:08:35.640 ************************************ 00:08:35.899 13:51:24 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:35.899 13:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.899 13:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.899 13:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:35.899 ************************************ 00:08:35.899 START TEST unittest_thread 00:08:35.899 ************************************ 00:08:35.899 13:51:24 unittest.unittest_thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:35.899 00:08:35.899 00:08:35.899 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.899 http://cunit.sourceforge.net/ 00:08:35.899 00:08:35.899 00:08:35.899 Suite: io_channel 00:08:35.899 Test: thread_alloc ...passed 00:08:35.899 Test: thread_send_msg ...passed 00:08:35.899 Test: thread_poller ...passed 00:08:35.899 Test: poller_pause ...passed 00:08:35.899 Test: thread_for_each ...passed 00:08:35.899 Test: for_each_channel_remove ...passed 00:08:35.899 Test: for_each_channel_unreg ...[2024-07-25 13:51:24.760875] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7ffd7afff300 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:35.899 passed 00:08:35.899 Test: thread_name ...passed 00:08:35.899 Test: channel ...[2024-07-25 13:51:24.765111] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x558eaa5c9180 00:08:35.899 passed 00:08:35.899 Test: channel_destroy_races ...passed 00:08:35.899 Test: thread_exit_test ...[2024-07-25 13:51:24.770462] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully 00:08:35.899 passed 00:08:35.899 Test: thread_update_stats_test ...passed 00:08:35.899 Test: nested_channel ...passed 00:08:35.899 Test: device_unregister_and_thread_exit_race ...passed 00:08:35.899 Test: cache_closest_timed_poller ...passed 00:08:35.899 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:35.899 Test: io_device_lookup ...passed 00:08:35.899 Test: spdk_spin ...[2024-07-25 13:51:24.781524] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:35.899 [2024-07-25 13:51:24.781613] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd7afff2f0 00:08:35.899 [2024-07-25 13:51:24.781724] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:35.899 [2024-07-25 13:51:24.783595] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:35.899 [2024-07-25 13:51:24.783735] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd7afff2f0 00:08:35.899 [2024-07-25 13:51:24.783784] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:35.899 [2024-07-25 13:51:24.783822] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd7afff2f0 00:08:35.899 [2024-07-25 13:51:24.783863] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:35.899 [2024-07-25 13:51:24.783903] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd7afff2f0 00:08:35.899 [2024-07-25 13:51:24.783958] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:35.899 [2024-07-25 13:51:24.784020] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd7afff2f0 00:08:35.899 passed 00:08:35.899 Test: for_each_channel_and_thread_exit_race ...passed 00:08:35.899 Test: for_each_thread_and_thread_exit_race ...passed 00:08:35.899 00:08:35.899 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.899 suites 1 1 n/a 0 0 00:08:35.899 tests 20 20 20 0 0 00:08:35.899 asserts 409 409 409 0 n/a 00:08:35.899 00:08:35.899 Elapsed time = 0.051 seconds 00:08:35.899 00:08:35.899 real 0m0.090s 00:08:35.899 user 0m0.073s 00:08:35.899 sys 0m0.017s 00:08:35.899 13:51:24 unittest.unittest_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.899 13:51:24 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.899 ************************************ 00:08:35.899 END TEST unittest_thread 00:08:35.899 ************************************ 00:08:35.899 13:51:24 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:35.899 13:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:35.899 13:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.900 13:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:35.900 ************************************ 00:08:35.900 START TEST unittest_iobuf 00:08:35.900 ************************************ 00:08:35.900 13:51:24 unittest.unittest_iobuf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:35.900 00:08:35.900 00:08:35.900 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.900 http://cunit.sourceforge.net/ 00:08:35.900 00:08:35.900 00:08:35.900 Suite: io_channel 00:08:35.900 Test: iobuf ...passed 00:08:35.900 Test: iobuf_cache ...[2024-07-25 13:51:24.897732] 
/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:35.900 [2024-07-25 13:51:24.898230] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:35.900 [2024-07-25 13:51:24.898453] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:35.900 [2024-07-25 13:51:24.898545] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:35.900 [2024-07-25 13:51:24.898660] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:35.900 [2024-07-25 13:51:24.898750] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:35.900 passed 00:08:35.900 Test: iobuf_priority ...passed 00:08:35.900 00:08:35.900 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.900 suites 1 1 n/a 0 0 00:08:35.900 tests 3 3 3 0 0 00:08:35.900 asserts 131 131 131 0 n/a 00:08:35.900 00:08:35.900 Elapsed time = 0.011 seconds 00:08:35.900 00:08:35.900 real 0m0.050s 00:08:35.900 user 0m0.030s 00:08:35.900 sys 0m0.020s 00:08:35.900 13:51:24 unittest.unittest_iobuf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.900 13:51:24 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:08:35.900 ************************************ 00:08:35.900 END TEST unittest_iobuf 00:08:35.900 ************************************ 00:08:36.159 13:51:24 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:08:36.159 13:51:24 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.159 13:51:24 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.159 13:51:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:36.159 ************************************ 00:08:36.159 START TEST unittest_util 00:08:36.159 ************************************ 00:08:36.159 13:51:24 unittest.unittest_util -- common/autotest_common.sh@1125 -- # unittest_util 00:08:36.159 13:51:24 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:36.159 00:08:36.159 00:08:36.159 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.159 http://cunit.sourceforge.net/ 00:08:36.159 00:08:36.159 00:08:36.159 Suite: base64 00:08:36.159 Test: test_base64_get_encoded_strlen ...passed 00:08:36.159 Test: test_base64_get_decoded_len ...passed 00:08:36.159 Test: test_base64_encode ...passed 00:08:36.159 Test: test_base64_decode ...passed 00:08:36.159 Test: test_base64_urlsafe_encode ...passed 00:08:36.159 Test: test_base64_urlsafe_decode ...passed 00:08:36.159 00:08:36.159 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.159 suites 1 1 n/a 0 0 00:08:36.159 tests 6 6 6 0 0 00:08:36.159 asserts 112 112 112 0 n/a 00:08:36.159 00:08:36.159 Elapsed time = 0.000 seconds 00:08:36.159 13:51:25 
unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:36.159 00:08:36.159 00:08:36.159 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.159 http://cunit.sourceforge.net/ 00:08:36.159 00:08:36.159 00:08:36.159 Suite: bit_array 00:08:36.159 Test: test_1bit ...passed 00:08:36.159 Test: test_64bit ...passed 00:08:36.159 Test: test_find ...passed 00:08:36.160 Test: test_resize ...passed 00:08:36.160 Test: test_errors ...passed 00:08:36.160 Test: test_count ...passed 00:08:36.160 Test: test_mask_store_load ...passed 00:08:36.160 Test: test_mask_clear ...passed 00:08:36.160 00:08:36.160 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.160 suites 1 1 n/a 0 0 00:08:36.160 tests 8 8 8 0 0 00:08:36.160 asserts 5075 5075 5075 0 n/a 00:08:36.160 00:08:36.160 Elapsed time = 0.002 seconds 00:08:36.160 13:51:25 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:36.160 00:08:36.160 00:08:36.160 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.160 http://cunit.sourceforge.net/ 00:08:36.160 00:08:36.160 00:08:36.160 Suite: cpuset 00:08:36.160 Test: test_cpuset ...passed 00:08:36.160 Test: test_cpuset_parse ...[2024-07-25 13:51:25.046608] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:36.160 [2024-07-25 13:51:25.046905] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:08:36.160 [2024-07-25 13:51:25.047017] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:36.160 [2024-07-25 13:51:25.047124] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:36.160 [2024-07-25 13:51:25.047173] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:36.160 [2024-07-25 13:51:25.047220] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:36.160 [2024-07-25 13:51:25.047262] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:36.160 [2024-07-25 13:51:25.047314] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:36.160 passed 00:08:36.160 Test: test_cpuset_fmt ...passed 00:08:36.160 Test: test_cpuset_foreach ...passed 00:08:36.160 00:08:36.160 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.160 suites 1 1 n/a 0 0 00:08:36.160 tests 4 4 4 0 0 00:08:36.160 asserts 90 90 90 0 n/a 00:08:36.160 00:08:36.160 Elapsed time = 0.003 seconds 00:08:36.160 13:51:25 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:36.160 00:08:36.160 00:08:36.160 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.160 http://cunit.sourceforge.net/ 00:08:36.160 00:08:36.160 00:08:36.160 Suite: crc16 00:08:36.160 Test: test_crc16_t10dif ...passed 00:08:36.160 Test: test_crc16_t10dif_seed ...passed 00:08:36.160 Test: test_crc16_t10dif_copy ...passed 00:08:36.160 00:08:36.160 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.160 suites 1 1 n/a 0 0 00:08:36.160 tests 3 3 3 0 0 
00:08:36.160 asserts 5 5 5 0 n/a 00:08:36.160 00:08:36.160 Elapsed time = 0.000 seconds 00:08:36.160 13:51:25 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:36.160 00:08:36.160 00:08:36.160 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.160 http://cunit.sourceforge.net/ 00:08:36.160 00:08:36.160 00:08:36.160 Suite: crc32_ieee 00:08:36.160 Test: test_crc32_ieee ...passed 00:08:36.160 00:08:36.160 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.160 suites 1 1 n/a 0 0 00:08:36.160 tests 1 1 1 0 0 00:08:36.160 asserts 1 1 1 0 n/a 00:08:36.160 00:08:36.160 Elapsed time = 0.000 seconds 00:08:36.160 13:51:25 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:36.160 00:08:36.160 00:08:36.160 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.160 http://cunit.sourceforge.net/ 00:08:36.160 00:08:36.160 00:08:36.160 Suite: crc32c 00:08:36.160 Test: test_crc32c ...passed 00:08:36.160 Test: test_crc32c_nvme ...passed 00:08:36.160 00:08:36.160 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.160 suites 1 1 n/a 0 0 00:08:36.160 tests 2 2 2 0 0 00:08:36.160 asserts 16 16 16 0 n/a 00:08:36.160 00:08:36.160 Elapsed time = 0.000 seconds 00:08:36.160 13:51:25 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:36.160 00:08:36.160 00:08:36.160 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.160 http://cunit.sourceforge.net/ 00:08:36.160 00:08:36.160 00:08:36.160 Suite: crc64 00:08:36.160 Test: test_crc64_nvme ...passed 00:08:36.160 00:08:36.160 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.160 suites 1 1 n/a 0 0 00:08:36.160 tests 1 1 1 0 0 00:08:36.160 asserts 4 4 4 0 n/a 00:08:36.160 00:08:36.160 Elapsed time = 0.000 seconds 00:08:36.160 13:51:25 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:36.421 00:08:36.421 00:08:36.421 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.421 http://cunit.sourceforge.net/ 00:08:36.421 00:08:36.421 00:08:36.421 Suite: string 00:08:36.421 Test: test_parse_ip_addr ...passed 00:08:36.421 Test: test_str_chomp ...passed 00:08:36.421 Test: test_parse_capacity ...passed 00:08:36.421 Test: test_sprintf_append_realloc ...passed 00:08:36.421 Test: test_strtol ...passed 00:08:36.421 Test: test_strtoll ...passed 00:08:36.421 Test: test_strarray ...passed 00:08:36.421 Test: test_strcpy_replace ...passed 00:08:36.421 00:08:36.421 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.421 suites 1 1 n/a 0 0 00:08:36.421 tests 8 8 8 0 0 00:08:36.421 asserts 161 161 161 0 n/a 00:08:36.421 00:08:36.421 Elapsed time = 0.001 seconds 00:08:36.421 13:51:25 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:36.421 00:08:36.421 00:08:36.421 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.421 http://cunit.sourceforge.net/ 00:08:36.421 00:08:36.421 00:08:36.421 Suite: dif 00:08:36.421 Test: dif_generate_and_verify_test ...[2024-07-25 13:51:25.236863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:36.421 [2024-07-25 13:51:25.237888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to 
compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:36.421 [2024-07-25 13:51:25.238346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:36.421 [2024-07-25 13:51:25.238778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:36.421 [2024-07-25 13:51:25.239255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:36.421 [2024-07-25 13:51:25.239688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:36.421 passed 00:08:36.421 Test: dif_disable_check_test ...[2024-07-25 13:51:25.240885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:36.421 [2024-07-25 13:51:25.241351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:36.421 [2024-07-25 13:51:25.241762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:36.421 passed 00:08:36.421 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-25 13:51:25.243013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:36.421 [2024-07-25 13:51:25.243452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:36.421 [2024-07-25 13:51:25.243917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:36.421 [2024-07-25 13:51:25.244422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:36.421 [2024-07-25 13:51:25.244909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:36.421 [2024-07-25 13:51:25.245367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:36.421 [2024-07-25 13:51:25.245828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:36.421 [2024-07-25 13:51:25.246261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:36.421 [2024-07-25 13:51:25.246699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:36.421 [2024-07-25 13:51:25.247166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:36.421 [2024-07-25 13:51:25.247631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:36.421 passed 00:08:36.421 Test: dif_apptag_mask_test ...[2024-07-25 13:51:25.248110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, 
Actual=1234 00:08:36.421 [2024-07-25 13:51:25.248562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:36.421 passed 00:08:36.421 Test: dif_sec_8_md_8_error_test ...[2024-07-25 13:51:25.248891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:08:36.421 passed 00:08:36.421 Test: dif_sec_512_md_0_error_test ...[2024-07-25 13:51:25.249122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.421 passed 00:08:36.421 Test: dif_sec_512_md_16_error_test ...[2024-07-25 13:51:25.249324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:36.421 [2024-07-25 13:51:25.249502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:36.421 passed 00:08:36.421 Test: dif_sec_4096_md_0_8_error_test ...[2024-07-25 13:51:25.249693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.421 [2024-07-25 13:51:25.249940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.421 [2024-07-25 13:51:25.250127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.421 [2024-07-25 13:51:25.250314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.421 passed 00:08:36.421 Test: dif_sec_4100_md_128_error_test ...[2024-07-25 13:51:25.250506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:36.421 [2024-07-25 13:51:25.250701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:36.421 passed 00:08:36.421 Test: dif_guard_seed_test ...passed 00:08:36.421 Test: dif_guard_value_test ...passed 00:08:36.421 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:36.421 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:36.421 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:36.421 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:36.421 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:36.421 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:36.421 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:36.421 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:36.421 Test: 
dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:36.421 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 13:51:25.295361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=dd4c, Actual=fd4c 00:08:36.422 [2024-07-25 13:51:25.298046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=de21, Actual=fe21 00:08:36.422 [2024-07-25 13:51:25.300682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.303311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.305935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.422 [2024-07-25 13:51:25.308574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.422 [2024-07-25 13:51:25.311187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=bb3f 00:08:36.422 [2024-07-25 13:51:25.313731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=a481 00:08:36.422 [2024-07-25 13:51:25.316370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3ab753ed, Actual=1ab753ed 00:08:36.422 [2024-07-25 13:51:25.318993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=18574660, Actual=38574660 00:08:36.422 [2024-07-25 13:51:25.321626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.324281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.326908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.422 [2024-07-25 13:51:25.329519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.422 [2024-07-25 13:51:25.332130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=227632b7 00:08:36.422 [2024-07-25 13:51:25.334682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=4596e48f 00:08:36.422 [2024-07-25 13:51:25.337226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.422 [2024-07-25 13:51:25.339865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:36.422 [2024-07-25 13:51:25.342491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.345109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.347706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2061 00:08:36.422 [2024-07-25 13:51:25.350349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2061 00:08:36.422 [2024-07-25 13:51:25.352953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.422 [2024-07-25 13:51:25.355514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=6c1c3ff1b8fcee99 00:08:36.422 passed 00:08:36.422 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-25 13:51:25.357157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:36.422 [2024-07-25 13:51:25.357595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:36.422 [2024-07-25 13:51:25.358047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.358480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.358910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.359383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.359830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=bb3f 00:08:36.422 [2024-07-25 13:51:25.360190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a481 00:08:36.422 [2024-07-25 13:51:25.360562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:36.422 [2024-07-25 13:51:25.361001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:36.422 [2024-07-25 13:51:25.361425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.361897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.362328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.362760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.363193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=227632b7 00:08:36.422 [2024-07-25 13:51:25.363578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4596e48f 00:08:36.422 [2024-07-25 13:51:25.363939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.422 [2024-07-25 13:51:25.364386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:36.422 [2024-07-25 13:51:25.364835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.365273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.365710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.422 [2024-07-25 13:51:25.366399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.422 [2024-07-25 13:51:25.366940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.422 [2024-07-25 13:51:25.367352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6c1c3ff1b8fcee99 00:08:36.422 passed 00:08:36.422 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-25 13:51:25.367820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:36.422 [2024-07-25 13:51:25.368287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:36.422 [2024-07-25 13:51:25.368847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.369372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.369964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.370523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.371072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=bb3f 00:08:36.422 [2024-07-25 13:51:25.371496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a481 00:08:36.422 [2024-07-25 13:51:25.371911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 
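The Guard, App Tag and Ref Tag mismatches logged here are expected output: these dif_sec_*_inject_* cases deliberately corrupt protection information and then assert that verification reports the mismatch, which is why each block of *ERROR* lines ends in "passed". For context, the sketch below shows the conventional 8-byte T10 protection-information tuple that the three compared fields belong to, with a bit-by-bit CRC-16 using the standard T10-DIF polynomial 0x8BB7. It is a self-contained illustration of the format, not code taken from SPDK's lib/util/dif.c, and the example tag values simply echo numbers seen in the log.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Conventional 8-byte T10 PI tuple carried in the block metadata. */
struct t10_pi_tuple {
    uint16_t guard;    /* CRC-16 over the data block */
    uint16_t app_tag;  /* application-defined tag */
    uint32_t ref_tag;  /* typically derived from the LBA */
};

/* Bit-by-bit CRC-16 with the T10-DIF polynomial 0x8BB7 (init 0, no reflection). */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof(block));

    struct t10_pi_tuple pi = {
        .guard   = crc16_t10dif(block, sizeof(block)),
        .app_tag = 0x0088,        /* matches the Expected App Tag in the log */
        .ref_tag = 0x00000058,    /* e.g. low 32 bits of the starting LBA */
    };

    printf("guard=0x%04x app_tag=0x%04x ref_tag=0x%08x\n",
           pi.guard, pi.app_tag, pi.ref_tag);
    return 0;
}
```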
00:08:36.422 [2024-07-25 13:51:25.372467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:36.422 [2024-07-25 13:51:25.373046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.373639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.374220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.374797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.422 [2024-07-25 13:51:25.375354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=227632b7 00:08:36.422 [2024-07-25 13:51:25.375781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4596e48f 00:08:36.422 [2024-07-25 13:51:25.376202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.422 [2024-07-25 13:51:25.376796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:36.422 [2024-07-25 13:51:25.377338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.377903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.422 [2024-07-25 13:51:25.378480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.422 [2024-07-25 13:51:25.379048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.422 [2024-07-25 13:51:25.379592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.422 [2024-07-25 13:51:25.380059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6c1c3ff1b8fcee99 00:08:36.422 passed 00:08:36.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-25 13:51:25.380550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:36.423 [2024-07-25 13:51:25.381082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:36.423 [2024-07-25 13:51:25.381389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.381705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 
[2024-07-25 13:51:25.382036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.382366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.382676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=bb3f 00:08:36.423 [2024-07-25 13:51:25.382919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a481 00:08:36.423 [2024-07-25 13:51:25.383156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:36.423 [2024-07-25 13:51:25.383457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:36.423 [2024-07-25 13:51:25.383766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.384085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.384398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.384711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.385029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=227632b7 00:08:36.423 [2024-07-25 13:51:25.385266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4596e48f 00:08:36.423 [2024-07-25 13:51:25.385507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.423 [2024-07-25 13:51:25.385840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:36.423 [2024-07-25 13:51:25.386149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.386462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.386761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.423 [2024-07-25 13:51:25.387068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.423 [2024-07-25 13:51:25.387375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.423 [2024-07-25 13:51:25.387631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6c1c3ff1b8fcee99 00:08:36.423 passed 00:08:36.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-25 13:51:25.387909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:36.423 [2024-07-25 13:51:25.388232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:36.423 [2024-07-25 13:51:25.388541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.388851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.389169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.389493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.389803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=bb3f 00:08:36.423 [2024-07-25 13:51:25.390035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a481 00:08:36.423 passed 00:08:36.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-25 13:51:25.390312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:36.423 [2024-07-25 13:51:25.390623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:36.423 [2024-07-25 13:51:25.390937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.391264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.391575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.391882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.392190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=227632b7 00:08:36.423 [2024-07-25 13:51:25.392417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4596e48f 00:08:36.423 [2024-07-25 13:51:25.392697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.423 [2024-07-25 13:51:25.393027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:36.423 [2024-07-25 13:51:25.393338] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.393641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.393963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.423 [2024-07-25 13:51:25.394279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.423 [2024-07-25 13:51:25.394586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.423 [2024-07-25 13:51:25.394845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6c1c3ff1b8fcee99 00:08:36.423 passed 00:08:36.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-25 13:51:25.395116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:36.423 [2024-07-25 13:51:25.395427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:36.423 [2024-07-25 13:51:25.395737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.396045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.396350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.396682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.396990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=bb3f 00:08:36.423 [2024-07-25 13:51:25.397223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=a481 00:08:36.423 passed 00:08:36.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-25 13:51:25.397513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3ab753ed, Actual=1ab753ed 00:08:36.423 [2024-07-25 13:51:25.397832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18574660, Actual=38574660 00:08:36.423 [2024-07-25 13:51:25.398137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.398469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.398777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 
00:08:36.423 [2024-07-25 13:51:25.399084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000058 00:08:36.423 [2024-07-25 13:51:25.399404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=227632b7 00:08:36.423 [2024-07-25 13:51:25.399639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=4596e48f 00:08:36.423 [2024-07-25 13:51:25.399890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.423 [2024-07-25 13:51:25.400210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d6837a266, Actual=88010a2d4837a266 00:08:36.423 [2024-07-25 13:51:25.400533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.400833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:36.423 [2024-07-25 13:51:25.401154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.424 [2024-07-25 13:51:25.401450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:36.424 [2024-07-25 13:51:25.401756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.424 [2024-07-25 13:51:25.402019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=6c1c3ff1b8fcee99 00:08:36.424 passed 00:08:36.424 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:36.424 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:36.424 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:36.424 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:36.424 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:36.424 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:36.424 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:36.424 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:36.424 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:36.424 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 13:51:25.446146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=dd4c, Actual=fd4c 00:08:36.424 [2024-07-25 13:51:25.447304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fee4, Actual=dee4 00:08:36.424 [2024-07-25 13:51:25.448425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.424 [2024-07-25 13:51:25.449610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 
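Two regularities in the Expected/Actual pairs above are worth spelling out. The expected Ref Tag is simply the LBA (LBA=88 expects 0x58, LBA=97 expects 0x61), consistent with seeding the reference tag from the starting block address, and the App Tag and Ref Tag failures differ from the expected value by exactly one set bit (0x88 versus 0x2088, 0x58 versus 0x20000058), consistent with single-bit error injection; the Guard failures change wholesale because the guard is a checksum recomputed over corrupted data. The snippet below only verifies the single-bit relationship on values copied from the log; it is an observation about the output, not an excerpt from the test source.

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Expected/Actual values copied from the log above. */
    uint16_t app_expected = 0x0088, app_actual = 0x2088;
    uint32_t ref_expected = 0x00000058, ref_actual = 0x20000058;

    /* XOR isolates the injected bits: exactly one bit is set in each case. */
    assert((app_expected ^ app_actual) == (1u << 13));
    assert((ref_expected ^ ref_actual) == (1u << 29));
    return 0;
}
```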
00:08:36.424 [2024-07-25 13:51:25.450988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.424 [2024-07-25 13:51:25.452137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.424 [2024-07-25 13:51:25.453287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=bb3f 00:08:36.424 [2024-07-25 13:51:25.454426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=1b7 00:08:36.424 [2024-07-25 13:51:25.455566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3ab753ed, Actual=1ab753ed 00:08:36.424 [2024-07-25 13:51:25.456703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=6c73b30a, Actual=4c73b30a 00:08:36.683 [2024-07-25 13:51:25.457852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.683 [2024-07-25 13:51:25.458982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.683 [2024-07-25 13:51:25.460135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.683 [2024-07-25 13:51:25.461271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.683 [2024-07-25 13:51:25.462432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=227632b7 00:08:36.683 [2024-07-25 13:51:25.463559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=78cc3ad0 00:08:36.683 [2024-07-25 13:51:25.464699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.683 [2024-07-25 13:51:25.465854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=2f5cd9b222d780f8, Actual=2f5cd9b202d780f8 00:08:36.683 [2024-07-25 13:51:25.467018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.468144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.469291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2061 00:08:36.684 [2024-07-25 13:51:25.470431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2061 00:08:36.684 [2024-07-25 13:51:25.471566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.684 [2024-07-25 13:51:25.472694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=d8d4375a625b9c6d 00:08:36.684 passed 00:08:36.684 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 13:51:25.473110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=dd4c, Actual=fd4c 00:08:36.684 [2024-07-25 13:51:25.473393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3733, Actual=1733 00:08:36.684 [2024-07-25 13:51:25.473678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.473976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.474278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.684 [2024-07-25 13:51:25.474576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.684 [2024-07-25 13:51:25.474860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=bb3f 00:08:36.684 [2024-07-25 13:51:25.475131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=c860 00:08:36.684 [2024-07-25 13:51:25.475410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3ab753ed, Actual=1ab753ed 00:08:36.684 [2024-07-25 13:51:25.475679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f2c37b58, Actual=d2c37b58 00:08:36.684 [2024-07-25 13:51:25.475953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.476250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.476533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.684 [2024-07-25 13:51:25.476809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.684 [2024-07-25 13:51:25.477103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=227632b7 00:08:36.684 [2024-07-25 13:51:25.477420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=e67cf282 00:08:36.684 [2024-07-25 13:51:25.477707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.684 [2024-07-25 13:51:25.478022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5bbd3b5ff9036505, Actual=5bbd3b5fd9036505 00:08:36.684 [2024-07-25 13:51:25.478302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.478583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.478862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:36.684 [2024-07-25 13:51:25.479138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:36.684 [2024-07-25 13:51:25.479416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.684 [2024-07-25 13:51:25.479727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=ac35d5b7b98f7990 00:08:36.684 passed 00:08:36.684 Test: dix_sec_0_md_8_error ...[2024-07-25 13:51:25.479797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:08:36.684 passed 00:08:36.684 Test: dix_sec_512_md_0_error ...[2024-07-25 13:51:25.479858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.684 passed 00:08:36.684 Test: dix_sec_512_md_16_error ...[2024-07-25 13:51:25.479901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:36.684 passed 00:08:36.684 Test: dix_sec_4096_md_0_8_error ...[2024-07-25 13:51:25.479950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:36.684 [2024-07-25 13:51:25.479994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.684 [2024-07-25 13:51:25.480044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.684 passed 00:08:36.684 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-25 13:51:25.480080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:36.684 [2024-07-25 13:51:25.480122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
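The dix_sec_*_error cases immediately above target argument validation in spdk_dif_ctx_init rather than data verification: judging by the test names and messages, they pass a zero data block size, metadata smaller than the 8-byte DIF field, and a 512-byte block where a 4kB multiple is required, and they expect the quoted error strings. The fragment below restates those three checks only to make the rejected geometries concrete; the helper name is invented and the exact conditions SPDK applies may differ.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PI_TUPLE_SIZE 8u   /* size of the DIF field within the metadata */

/* Illustrative geometry check mirroring the three errors logged above. */
static bool dif_geometry_ok(uint32_t data_block_size, uint32_t md_size,
                            bool md_interleave)
{
    if (data_block_size == 0) {
        fprintf(stderr, "Zero data block size is not allowed\n");
        return false;
    }
    if (md_size < PI_TUPLE_SIZE) {
        fprintf(stderr, "Metadata size is smaller than DIF size\n");
        return false;
    }
    /* Simplified: the separated-metadata (DIX) cases expect 4kB-aligned blocks. */
    if (!md_interleave && (data_block_size % 4096) != 0) {
        fprintf(stderr, "Data block size should be a multiple of 4kB\n");
        return false;
    }
    return true;
}

int main(void)
{
    /* The rejected geometries suggested by the dix_sec_*_error test names. */
    dif_geometry_ok(0, 8, false);      /* zero block size    */
    dif_geometry_ok(512, 0, false);    /* metadata too small */
    dif_geometry_ok(512, 16, false);   /* not a 4kB multiple */
    return 0;
}
```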
00:08:36.684 passed 00:08:36.684 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:36.684 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:36.684 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:36.684 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:36.684 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:36.684 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:36.684 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:36.684 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:36.684 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 13:51:25.523671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=dd4c, Actual=fd4c 00:08:36.684 [2024-07-25 13:51:25.524817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fee4, Actual=dee4 00:08:36.684 [2024-07-25 13:51:25.525946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.527065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.528216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.684 [2024-07-25 13:51:25.529327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.684 [2024-07-25 13:51:25.530453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=bb3f 00:08:36.684 [2024-07-25 13:51:25.531575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=1b7 00:08:36.684 [2024-07-25 13:51:25.532712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3ab753ed, Actual=1ab753ed 00:08:36.684 [2024-07-25 13:51:25.533841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=6c73b30a, Actual=4c73b30a 00:08:36.684 [2024-07-25 13:51:25.534984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.536088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.537214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.684 [2024-07-25 13:51:25.538361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000061 00:08:36.684 [2024-07-25 13:51:25.539481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=227632b7 00:08:36.684 [2024-07-25 13:51:25.540604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=78cc3ad0 00:08:36.684 
[2024-07-25 13:51:25.541755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.684 [2024-07-25 13:51:25.542886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=2f5cd9b222d780f8, Actual=2f5cd9b202d780f8 00:08:36.684 [2024-07-25 13:51:25.544013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.545143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.546287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2061 00:08:36.684 [2024-07-25 13:51:25.547410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2061 00:08:36.684 [2024-07-25 13:51:25.548553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.684 passed 00:08:36.684 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 13:51:25.549682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=d8d4375a625b9c6d 00:08:36.684 [2024-07-25 13:51:25.550057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=dd4c, Actual=fd4c 00:08:36.684 [2024-07-25 13:51:25.550351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3733, Actual=1733 00:08:36.684 [2024-07-25 13:51:25.550630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.550922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.684 [2024-07-25 13:51:25.551222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.685 [2024-07-25 13:51:25.551496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.685 [2024-07-25 13:51:25.551763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=bb3f 00:08:36.685 [2024-07-25 13:51:25.552037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=c860 00:08:36.685 [2024-07-25 13:51:25.552302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3ab753ed, Actual=1ab753ed 00:08:36.685 [2024-07-25 13:51:25.552594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f2c37b58, Actual=d2c37b58 00:08:36.685 [2024-07-25 13:51:25.552872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.685 [2024-07-25 13:51:25.553142] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.685 [2024-07-25 13:51:25.553417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.685 [2024-07-25 13:51:25.553690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000059 00:08:36.685 [2024-07-25 13:51:25.553974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=227632b7 00:08:36.685 [2024-07-25 13:51:25.554257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=e67cf282 00:08:36.685 [2024-07-25 13:51:25.554549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a772aecc20d3, Actual=a576a7728ecc20d3 00:08:36.685 [2024-07-25 13:51:25.554824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5bbd3b5ff9036505, Actual=5bbd3b5fd9036505 00:08:36.685 [2024-07-25 13:51:25.555091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.685 [2024-07-25 13:51:25.555372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:36.685 [2024-07-25 13:51:25.555638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:36.685 [2024-07-25 13:51:25.555910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:36.685 [2024-07-25 13:51:25.556225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=66eecaedbcf04655 00:08:36.685 [2024-07-25 13:51:25.556521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=ac35d5b7b98f7990 00:08:36.685 passed 00:08:36.685 Test: set_md_interleave_iovs_test ...passed 00:08:36.685 Test: set_md_interleave_iovs_split_test ...passed 00:08:36.685 Test: dif_generate_stream_pi_16_test ...passed 00:08:36.685 Test: dif_generate_stream_test ...passed 00:08:36.685 Test: set_md_interleave_iovs_alignment_test ...[2024-07-25 13:51:25.563944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1857:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
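This suite mixes two layouts: in the usual T10 terminology, the dif_* cases interleave the 8-byte tuple with the data in an extended block, while the dix_* cases keep data and protection information in separate buffers, and the set_md_interleave_iovs_* tests then exercise splitting the interleaved form across iovecs, with the "Buffer overflow will occur" message above apparently being the expected rejection of an output buffer that is too small. A rough sketch of the interleaved layout follows; the block and metadata sizes are illustrative assumptions and the function is not SPDK's.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative extended-block layout: each block carries its data followed by
 * an 8-byte protection-information field, so N blocks of 512 data bytes occupy
 * N * (512 + 8) bytes in the interleaved buffer. */
enum { DATA_SIZE = 512, MD_SIZE = 8, EXT_BLOCK = DATA_SIZE + MD_SIZE };

static void interleave_blocks(uint8_t *ext_buf, size_t ext_buf_len,
                              const uint8_t *data, const uint8_t *md,
                              size_t num_blocks)
{
    if (ext_buf_len < num_blocks * EXT_BLOCK) {
        return; /* output buffer too small: the overflow case rejected above */
    }
    for (size_t i = 0; i < num_blocks; i++) {
        memcpy(ext_buf + i * EXT_BLOCK, data + i * DATA_SIZE, DATA_SIZE);
        memcpy(ext_buf + i * EXT_BLOCK + DATA_SIZE, md + i * MD_SIZE, MD_SIZE);
    }
}

int main(void)
{
    static uint8_t data[4 * DATA_SIZE], md[4 * MD_SIZE], ext[4 * EXT_BLOCK];
    interleave_blocks(ext, sizeof(ext), data, md, 4);
    return 0;
}
```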
00:08:36.685 passed 00:08:36.685 Test: dif_generate_split_test ...passed 00:08:36.685 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:36.685 Test: dif_verify_split_test ...passed 00:08:36.685 Test: dif_verify_stream_multi_segments_test ...passed 00:08:36.685 Test: update_crc32c_pi_16_test ...passed 00:08:36.685 Test: update_crc32c_test ...passed 00:08:36.685 Test: dif_update_crc32c_split_test ...passed 00:08:36.685 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:36.685 Test: get_range_with_md_test ...passed 00:08:36.685 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:36.685 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:36.685 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:36.685 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:36.685 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:36.685 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:36.685 Test: dif_generate_and_verify_unmap_test ...passed 00:08:36.685 Test: dif_pi_format_check_test ...passed 00:08:36.685 Test: dif_type_check_test ...passed 00:08:36.685 00:08:36.685 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.685 suites 1 1 n/a 0 0 00:08:36.685 tests 86 86 86 0 0 00:08:36.685 asserts 3605 3605 3605 0 n/a 00:08:36.685 00:08:36.685 Elapsed time = 0.365 seconds 00:08:36.685 13:51:25 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:36.685 00:08:36.685 00:08:36.685 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.685 http://cunit.sourceforge.net/ 00:08:36.685 00:08:36.685 00:08:36.685 Suite: iov 00:08:36.685 Test: test_single_iov ...passed 00:08:36.685 Test: test_simple_iov ...passed 00:08:36.685 Test: test_complex_iov ...passed 00:08:36.685 Test: test_iovs_to_buf ...passed 00:08:36.685 Test: test_buf_to_iovs ...passed 00:08:36.685 Test: test_memset ...passed 00:08:36.685 Test: test_iov_one ...passed 00:08:36.685 Test: test_iov_xfer ...passed 00:08:36.685 00:08:36.685 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.685 suites 1 1 n/a 0 0 00:08:36.685 tests 8 8 8 0 0 00:08:36.685 asserts 156 156 156 0 n/a 00:08:36.685 00:08:36.685 Elapsed time = 0.000 seconds 00:08:36.685 13:51:25 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:36.685 00:08:36.685 00:08:36.685 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.685 http://cunit.sourceforge.net/ 00:08:36.685 00:08:36.685 00:08:36.685 Suite: math 00:08:36.685 Test: test_serial_number_arithmetic ...passed 00:08:36.685 Suite: erase 00:08:36.685 Test: test_memset_s ...passed 00:08:36.685 00:08:36.685 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.685 suites 2 2 n/a 0 0 00:08:36.685 tests 2 2 2 0 0 00:08:36.685 asserts 18 18 18 0 n/a 00:08:36.685 00:08:36.685 Elapsed time = 0.000 seconds 00:08:36.685 13:51:25 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:36.685 00:08:36.685 00:08:36.685 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.685 http://cunit.sourceforge.net/ 00:08:36.685 00:08:36.685 00:08:36.685 Suite: pipe 00:08:36.685 Test: test_create_destroy ...passed 00:08:36.685 Test: test_write_get_buffer ...passed 00:08:36.685 Test: test_write_advance ...passed 00:08:36.685 
Test: test_read_get_buffer ...passed 00:08:36.685 Test: test_read_advance ...passed 00:08:36.685 Test: test_data ...passed 00:08:36.685 00:08:36.685 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.685 suites 1 1 n/a 0 0 00:08:36.685 tests 6 6 6 0 0 00:08:36.685 asserts 251 251 251 0 n/a 00:08:36.685 00:08:36.685 Elapsed time = 0.000 seconds 00:08:36.945 13:51:25 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:36.945 00:08:36.945 00:08:36.945 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.945 http://cunit.sourceforge.net/ 00:08:36.945 00:08:36.945 00:08:36.945 Suite: xor 00:08:36.945 Test: test_xor_gen ...passed 00:08:36.945 00:08:36.945 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.945 suites 1 1 n/a 0 0 00:08:36.945 tests 1 1 1 0 0 00:08:36.945 asserts 17 17 17 0 n/a 00:08:36.945 00:08:36.945 Elapsed time = 0.007 seconds 00:08:36.945 00:08:36.945 real 0m0.793s 00:08:36.945 user 0m0.569s 00:08:36.945 sys 0m0.218s 00:08:36.945 13:51:25 unittest.unittest_util -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.945 13:51:25 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:08:36.945 ************************************ 00:08:36.945 END TEST unittest_util 00:08:36.945 ************************************ 00:08:36.945 13:51:25 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:36.945 13:51:25 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:36.945 13:51:25 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.945 13:51:25 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.945 13:51:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:36.945 ************************************ 00:08:36.945 START TEST unittest_vhost 00:08:36.945 ************************************ 00:08:36.945 13:51:25 unittest.unittest_vhost -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:36.945 00:08:36.945 00:08:36.945 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.945 http://cunit.sourceforge.net/ 00:08:36.945 00:08:36.945 00:08:36.945 Suite: vhost_suite 00:08:36.945 Test: desc_to_iov_test ...[2024-07-25 13:51:25.850320] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:36.945 passed 00:08:36.945 Test: create_controller_test ...[2024-07-25 13:51:25.855057] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:36.945 [2024-07-25 13:51:25.855194] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:36.945 [2024-07-25 13:51:25.855345] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:36.945 [2024-07-25 13:51:25.855456] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:36.945 [2024-07-25 13:51:25.855511] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:36.945 [2024-07-25 13:51:25.855965] 
/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:08:36.945 [2024-07-25 13:51:25.857003] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:36.945 passed 00:08:36.945 Test: session_find_by_vid_test ...passed 00:08:36.945 Test: remove_controller_test ...[2024-07-25 13:51:25.859119] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:36.945 passed 00:08:36.945 Test: vq_avail_ring_get_test ...passed 00:08:36.945 Test: vq_packed_ring_test ...passed 00:08:36.945 Test: vhost_blk_construct_test ...passed 00:08:36.945 00:08:36.945 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.945 suites 1 1 n/a 0 0 00:08:36.945 tests 7 7 7 0 0 00:08:36.945 asserts 147 147 147 0 n/a 00:08:36.945 00:08:36.945 Elapsed time = 0.013 seconds 00:08:36.945 00:08:36.945 real 0m0.054s 00:08:36.945 user 0m0.033s 00:08:36.945 sys 0m0.021s 00:08:36.945 13:51:25 unittest.unittest_vhost -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.945 13:51:25 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:08:36.945 ************************************ 00:08:36.945 END TEST unittest_vhost 00:08:36.945 ************************************ 00:08:36.945 13:51:25 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:36.945 13:51:25 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.945 13:51:25 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.945 13:51:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:36.945 ************************************ 00:08:36.945 START TEST unittest_dma 00:08:36.945 ************************************ 00:08:36.945 13:51:25 unittest.unittest_dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:36.945 00:08:36.945 00:08:36.945 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.945 http://cunit.sourceforge.net/ 00:08:36.945 00:08:36.945 00:08:36.945 Suite: dma_suite 00:08:36.945 Test: test_dma ...[2024-07-25 13:51:25.945084] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:36.945 passed 00:08:36.945 00:08:36.945 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.945 suites 1 1 n/a 0 0 00:08:36.945 tests 1 1 1 0 0 00:08:36.945 asserts 54 54 54 0 n/a 00:08:36.945 00:08:36.945 Elapsed time = 0.001 seconds 00:08:36.945 00:08:36.945 real 0m0.027s 00:08:36.945 user 0m0.015s 00:08:36.945 sys 0m0.013s 00:08:36.945 13:51:25 unittest.unittest_dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.945 13:51:25 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:36.945 
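The vhost create_controller_test output above walks the registration rejection paths: a cpumask selecting cores outside the application's core mask (0xf0 and 0xff against 0xf), a missing name, a name so long that the resulting UNIX socket path overflows (the wall of x characters), and a duplicate controller name. The sketch below mirrors the cpumask and name checks with invented helper names and the core mask hard-coded to 0xf as in the log; it is not the validation code in lib/vhost.

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical checks echoing the rejections logged by create_controller_test. */

static bool cpumask_ok(uint64_t requested, uint64_t app_core_mask)
{
    if ((requested & ~app_core_mask) != 0) {
        fprintf(stderr, "cpumask 0x%" PRIx64 " is invalid (core mask is 0x%" PRIx64 ")\n",
                requested, app_core_mask);
        return false;
    }
    return true;
}

static bool name_ok(const char *name, size_t socket_path_max)
{
    if (name == NULL || name[0] == '\0') {
        fprintf(stderr, "Can't register controller with no name\n");
        return false;
    }
    if (strlen(name) >= socket_path_max) {
        fprintf(stderr, "Resulting socket path for controller is too long\n");
        return false;
    }
    return true;
}

int main(void)
{
    cpumask_ok(0xf0, 0xf);        /* outside the app's 0xf core mask, as in the log */
    cpumask_ok(0xff, 0xf);
    name_ok("", 108);             /* empty name is rejected */
    name_ok("vdev_name_0", 108);  /* accepted; 108 is the classic sun_path limit */
    return 0;
}
```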
************************************ 00:08:36.945 END TEST unittest_dma 00:08:36.946 ************************************ 00:08:37.205 13:51:25 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:37.205 13:51:25 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.205 13:51:25 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.205 13:51:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:37.205 ************************************ 00:08:37.205 START TEST unittest_init 00:08:37.205 ************************************ 00:08:37.205 13:51:26 unittest.unittest_init -- common/autotest_common.sh@1125 -- # unittest_init 00:08:37.205 13:51:26 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:37.205 00:08:37.205 00:08:37.205 CUnit - A unit testing framework for C - Version 2.1-3 00:08:37.205 http://cunit.sourceforge.net/ 00:08:37.205 00:08:37.205 00:08:37.205 Suite: subsystem_suite 00:08:37.205 Test: subsystem_sort_test_depends_on_single ...passed 00:08:37.205 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:37.205 Test: subsystem_sort_test_missing_dependency ...[2024-07-25 13:51:26.021780] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:37.205 passed 00:08:37.205 00:08:37.205 [2024-07-25 13:51:26.022125] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:37.205 Run Summary: Type Total Ran Passed Failed Inactive 00:08:37.205 suites 1 1 n/a 0 0 00:08:37.205 tests 3 3 3 0 0 00:08:37.205 asserts 20 20 20 0 n/a 00:08:37.205 00:08:37.205 Elapsed time = 0.001 seconds 00:08:37.205 00:08:37.205 real 0m0.033s 00:08:37.205 user 0m0.012s 00:08:37.205 sys 0m0.021s 00:08:37.205 13:51:26 unittest.unittest_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.205 13:51:26 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:37.205 ************************************ 00:08:37.205 END TEST unittest_init 00:08:37.205 ************************************ 00:08:37.205 13:51:26 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:37.205 13:51:26 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.205 13:51:26 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.205 13:51:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:37.205 ************************************ 00:08:37.205 START TEST unittest_keyring 00:08:37.205 ************************************ 00:08:37.205 13:51:26 unittest.unittest_keyring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:37.205 00:08:37.205 00:08:37.205 CUnit - A unit testing framework for C - Version 2.1-3 00:08:37.205 http://cunit.sourceforge.net/ 00:08:37.205 00:08:37.205 00:08:37.205 Suite: keyring 00:08:37.205 Test: test_keyring_add_remove ...[2024-07-25 13:51:26.106192] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:37.205 [2024-07-25 13:51:26.106694] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:37.205 [2024-07-25 13:51:26.106815] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 
126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:37.205 passed 00:08:37.205 Test: test_keyring_get_put ...passed 00:08:37.205 00:08:37.205 Run Summary: Type Total Ran Passed Failed Inactive 00:08:37.205 suites 1 1 n/a 0 0 00:08:37.205 tests 2 2 2 0 0 00:08:37.205 asserts 44 44 44 0 n/a 00:08:37.205 00:08:37.205 Elapsed time = 0.001 seconds 00:08:37.205 00:08:37.205 real 0m0.031s 00:08:37.205 user 0m0.016s 00:08:37.205 sys 0m0.015s 00:08:37.205 13:51:26 unittest.unittest_keyring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.205 13:51:26 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:37.205 ************************************ 00:08:37.205 END TEST unittest_keyring 00:08:37.205 ************************************ 00:08:37.205 13:51:26 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:37.205 13:51:26 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:37.205 13:51:26 unittest -- unit/unittest.sh@293 -- # hostname 00:08:37.205 13:51:26 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:37.535 geninfo: WARNING: invalid characters removed from testname! 00:09:09.599 13:51:56 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:09:13.781 13:52:02 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:17.103 13:52:05 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:20.450 13:52:08 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:23.749 13:52:12 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
'/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:26.287 13:52:15 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:29.571 13:52:18 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:32.100 13:52:20 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:32.100 13:52:20 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:33.033 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:33.033 Found 322 entries. 00:09:33.033 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:33.033 Writing .css and .png files. 00:09:33.033 Generating output. 00:09:33.033 Processing file include/linux/virtio_ring.h 00:09:33.291 Processing file include/spdk/thread.h 00:09:33.291 Processing file include/spdk/bdev_module.h 00:09:33.291 Processing file include/spdk/trace.h 00:09:33.291 Processing file include/spdk/util.h 00:09:33.291 Processing file include/spdk/nvme.h 00:09:33.292 Processing file include/spdk/nvmf_transport.h 00:09:33.292 Processing file include/spdk/base64.h 00:09:33.292 Processing file include/spdk/mmio.h 00:09:33.292 Processing file include/spdk/histogram_data.h 00:09:33.292 Processing file include/spdk/nvme_spec.h 00:09:33.292 Processing file include/spdk/endian.h 00:09:33.292 Processing file include/spdk_internal/sgl.h 00:09:33.292 Processing file include/spdk_internal/sock.h 00:09:33.292 Processing file include/spdk_internal/nvme_tcp.h 00:09:33.292 Processing file include/spdk_internal/rdma_utils.h 00:09:33.292 Processing file include/spdk_internal/utf.h 00:09:33.292 Processing file include/spdk_internal/virtio.h 00:09:33.551 Processing file lib/accel/accel.c 00:09:33.551 Processing file lib/accel/accel_sw.c 00:09:33.551 Processing file lib/accel/accel_rpc.c 00:09:33.809 Processing file lib/bdev/bdev.c 00:09:33.809 Processing file lib/bdev/bdev_rpc.c 00:09:33.809 Processing file lib/bdev/part.c 00:09:33.809 Processing file lib/bdev/bdev_zone.c 00:09:33.809 Processing file lib/bdev/scsi_nvme.c 00:09:34.067 Processing file lib/blob/blobstore.c 00:09:34.067 Processing file lib/blob/zeroes.c 00:09:34.067 Processing file lib/blob/blob_bs_dev.c 00:09:34.067 Processing file lib/blob/request.c 00:09:34.067 Processing file lib/blob/blobstore.h 00:09:34.067 Processing file lib/blobfs/blobfs.c 00:09:34.067 Processing file lib/blobfs/tree.c 00:09:34.326 Processing file lib/conf/conf.c 00:09:34.326 Processing file lib/dma/dma.c 00:09:34.582 Processing file lib/env_dpdk/pci_virtio.c 
00:09:34.582 Processing file lib/env_dpdk/init.c 00:09:34.582 Processing file lib/env_dpdk/pci_vmd.c 00:09:34.582 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:34.582 Processing file lib/env_dpdk/memory.c 00:09:34.582 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:34.582 Processing file lib/env_dpdk/pci_idxd.c 00:09:34.582 Processing file lib/env_dpdk/pci_event.c 00:09:34.582 Processing file lib/env_dpdk/pci_dpdk.c 00:09:34.582 Processing file lib/env_dpdk/threads.c 00:09:34.582 Processing file lib/env_dpdk/pci.c 00:09:34.582 Processing file lib/env_dpdk/env.c 00:09:34.582 Processing file lib/env_dpdk/sigbus_handler.c 00:09:34.582 Processing file lib/env_dpdk/pci_ioat.c 00:09:34.840 Processing file lib/event/scheduler_static.c 00:09:34.840 Processing file lib/event/reactor.c 00:09:34.840 Processing file lib/event/log_rpc.c 00:09:34.840 Processing file lib/event/app_rpc.c 00:09:34.840 Processing file lib/event/app.c 00:09:35.406 Processing file lib/ftl/ftl_trace.c 00:09:35.406 Processing file lib/ftl/ftl_layout.c 00:09:35.406 Processing file lib/ftl/ftl_band_ops.c 00:09:35.406 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:35.406 Processing file lib/ftl/ftl_l2p_cache.c 00:09:35.406 Processing file lib/ftl/ftl_core.c 00:09:35.406 Processing file lib/ftl/ftl_band.h 00:09:35.406 Processing file lib/ftl/ftl_sb.c 00:09:35.406 Processing file lib/ftl/ftl_io.c 00:09:35.406 Processing file lib/ftl/ftl_l2p.c 00:09:35.406 Processing file lib/ftl/ftl_writer.c 00:09:35.406 Processing file lib/ftl/ftl_reloc.c 00:09:35.406 Processing file lib/ftl/ftl_p2l.c 00:09:35.406 Processing file lib/ftl/ftl_rq.c 00:09:35.406 Processing file lib/ftl/ftl_nv_cache.c 00:09:35.406 Processing file lib/ftl/ftl_band.c 00:09:35.406 Processing file lib/ftl/ftl_io.h 00:09:35.406 Processing file lib/ftl/ftl_debug.h 00:09:35.406 Processing file lib/ftl/ftl_init.c 00:09:35.406 Processing file lib/ftl/ftl_debug.c 00:09:35.406 Processing file lib/ftl/ftl_writer.h 00:09:35.406 Processing file lib/ftl/ftl_l2p_flat.c 00:09:35.406 Processing file lib/ftl/ftl_nv_cache.h 00:09:35.406 Processing file lib/ftl/ftl_core.h 00:09:35.406 Processing file lib/ftl/base/ftl_base_dev.c 00:09:35.406 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:35.664 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:35.664 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:35.664 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:35.923 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:09:35.923 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:35.923 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:35.923 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:09:35.923 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:35.923 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:09:35.923 Processing file lib/ftl/upgrade/ftl_sb_v3.c 
00:09:35.923 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:09:36.180 Processing file lib/ftl/utils/ftl_conf.c 00:09:36.180 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:36.180 Processing file lib/ftl/utils/ftl_property.c 00:09:36.180 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:36.180 Processing file lib/ftl/utils/ftl_md.c 00:09:36.180 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:36.180 Processing file lib/ftl/utils/ftl_mempool.c 00:09:36.180 Processing file lib/ftl/utils/ftl_property.h 00:09:36.180 Processing file lib/ftl/utils/ftl_df.h 00:09:36.180 Processing file lib/idxd/idxd_internal.h 00:09:36.180 Processing file lib/idxd/idxd.c 00:09:36.180 Processing file lib/idxd/idxd_user.c 00:09:36.180 Processing file lib/init/subsystem_rpc.c 00:09:36.180 Processing file lib/init/subsystem.c 00:09:36.180 Processing file lib/init/rpc.c 00:09:36.180 Processing file lib/init/json_config.c 00:09:36.439 Processing file lib/ioat/ioat.c 00:09:36.439 Processing file lib/ioat/ioat_internal.h 00:09:36.697 Processing file lib/iscsi/portal_grp.c 00:09:36.697 Processing file lib/iscsi/task.c 00:09:36.697 Processing file lib/iscsi/init_grp.c 00:09:36.697 Processing file lib/iscsi/task.h 00:09:36.697 Processing file lib/iscsi/iscsi_rpc.c 00:09:36.697 Processing file lib/iscsi/md5.c 00:09:36.697 Processing file lib/iscsi/iscsi.c 00:09:36.697 Processing file lib/iscsi/iscsi_subsystem.c 00:09:36.697 Processing file lib/iscsi/conn.c 00:09:36.697 Processing file lib/iscsi/tgt_node.c 00:09:36.697 Processing file lib/iscsi/iscsi.h 00:09:36.697 Processing file lib/iscsi/param.c 00:09:36.955 Processing file lib/json/json_parse.c 00:09:36.955 Processing file lib/json/json_write.c 00:09:36.955 Processing file lib/json/json_util.c 00:09:36.955 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:36.955 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:36.955 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:36.955 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:36.955 Processing file lib/keyring/keyring_rpc.c 00:09:36.955 Processing file lib/keyring/keyring.c 00:09:37.213 Processing file lib/log/log_deprecated.c 00:09:37.213 Processing file lib/log/log.c 00:09:37.213 Processing file lib/log/log_flags.c 00:09:37.213 Processing file lib/lvol/lvol.c 00:09:37.213 Processing file lib/nbd/nbd.c 00:09:37.213 Processing file lib/nbd/nbd_rpc.c 00:09:37.471 Processing file lib/notify/notify_rpc.c 00:09:37.471 Processing file lib/notify/notify.c 00:09:38.038 Processing file lib/nvme/nvme_cuse.c 00:09:38.038 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:38.038 Processing file lib/nvme/nvme_discovery.c 00:09:38.038 Processing file lib/nvme/nvme_pcie_internal.h 00:09:38.038 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:38.038 Processing file lib/nvme/nvme.c 00:09:38.038 Processing file lib/nvme/nvme_internal.h 00:09:38.038 Processing file lib/nvme/nvme_auth.c 00:09:38.038 Processing file lib/nvme/nvme_poll_group.c 00:09:38.038 Processing file lib/nvme/nvme_io_msg.c 00:09:38.038 Processing file lib/nvme/nvme_ns.c 00:09:38.038 Processing file lib/nvme/nvme_zns.c 00:09:38.038 Processing file lib/nvme/nvme_tcp.c 00:09:38.038 Processing file lib/nvme/nvme_qpair.c 00:09:38.038 Processing file lib/nvme/nvme_opal.c 00:09:38.038 Processing file lib/nvme/nvme_rdma.c 00:09:38.038 Processing file lib/nvme/nvme_ns_cmd.c 00:09:38.038 Processing file lib/nvme/nvme_ctrlr.c 00:09:38.038 Processing file lib/nvme/nvme_fabric.c 00:09:38.038 Processing file lib/nvme/nvme_pcie_common.c 00:09:38.038 
Processing file lib/nvme/nvme_transport.c 00:09:38.038 Processing file lib/nvme/nvme_quirks.c 00:09:38.038 Processing file lib/nvme/nvme_pcie.c 00:09:38.038 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:38.651 Processing file lib/nvmf/transport.c 00:09:38.651 Processing file lib/nvmf/auth.c 00:09:38.651 Processing file lib/nvmf/subsystem.c 00:09:38.651 Processing file lib/nvmf/tcp.c 00:09:38.651 Processing file lib/nvmf/nvmf.c 00:09:38.651 Processing file lib/nvmf/nvmf_rpc.c 00:09:38.651 Processing file lib/nvmf/ctrlr.c 00:09:38.651 Processing file lib/nvmf/rdma.c 00:09:38.651 Processing file lib/nvmf/ctrlr_bdev.c 00:09:38.651 Processing file lib/nvmf/nvmf_internal.h 00:09:38.651 Processing file lib/nvmf/ctrlr_discovery.c 00:09:38.651 Processing file lib/rdma_provider/common.c 00:09:38.651 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:09:38.909 Processing file lib/rdma_utils/rdma_utils.c 00:09:38.909 Processing file lib/rpc/rpc.c 00:09:39.167 Processing file lib/scsi/scsi_rpc.c 00:09:39.167 Processing file lib/scsi/scsi.c 00:09:39.167 Processing file lib/scsi/task.c 00:09:39.167 Processing file lib/scsi/dev.c 00:09:39.167 Processing file lib/scsi/scsi_bdev.c 00:09:39.167 Processing file lib/scsi/scsi_pr.c 00:09:39.167 Processing file lib/scsi/lun.c 00:09:39.167 Processing file lib/scsi/port.c 00:09:39.167 Processing file lib/sock/sock.c 00:09:39.167 Processing file lib/sock/sock_rpc.c 00:09:39.167 Processing file lib/thread/iobuf.c 00:09:39.167 Processing file lib/thread/thread.c 00:09:39.425 Processing file lib/trace/trace_rpc.c 00:09:39.425 Processing file lib/trace/trace.c 00:09:39.425 Processing file lib/trace/trace_flags.c 00:09:39.425 Processing file lib/trace_parser/trace.cpp 00:09:39.425 Processing file lib/ut/ut.c 00:09:39.425 Processing file lib/ut_mock/mock.c 00:09:39.991 Processing file lib/util/dif.c 00:09:39.991 Processing file lib/util/crc32c.c 00:09:39.991 Processing file lib/util/fd.c 00:09:39.991 Processing file lib/util/iov.c 00:09:39.991 Processing file lib/util/file.c 00:09:39.991 Processing file lib/util/xor.c 00:09:39.991 Processing file lib/util/hexlify.c 00:09:39.991 Processing file lib/util/string.c 00:09:39.991 Processing file lib/util/fd_group.c 00:09:39.991 Processing file lib/util/bit_array.c 00:09:39.991 Processing file lib/util/base64.c 00:09:39.991 Processing file lib/util/crc32.c 00:09:39.991 Processing file lib/util/crc16.c 00:09:39.991 Processing file lib/util/math.c 00:09:39.991 Processing file lib/util/zipf.c 00:09:39.991 Processing file lib/util/pipe.c 00:09:39.991 Processing file lib/util/net.c 00:09:39.991 Processing file lib/util/crc32_ieee.c 00:09:39.991 Processing file lib/util/cpuset.c 00:09:39.991 Processing file lib/util/uuid.c 00:09:39.991 Processing file lib/util/crc64.c 00:09:39.991 Processing file lib/util/strerror_tls.c 00:09:39.991 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:39.991 Processing file lib/vfio_user/host/vfio_user.c 00:09:39.991 Processing file lib/vhost/vhost_scsi.c 00:09:39.991 Processing file lib/vhost/vhost_rpc.c 00:09:39.991 Processing file lib/vhost/vhost_internal.h 00:09:39.991 Processing file lib/vhost/rte_vhost_user.c 00:09:39.991 Processing file lib/vhost/vhost_blk.c 00:09:39.991 Processing file lib/vhost/vhost.c 00:09:40.250 Processing file lib/virtio/virtio.c 00:09:40.250 Processing file lib/virtio/virtio_vfio_user.c 00:09:40.250 Processing file lib/virtio/virtio_pci.c 00:09:40.250 Processing file lib/virtio/virtio_vhost_user.c 00:09:40.250 Processing file lib/vmd/vmd.c 00:09:40.250 
Processing file lib/vmd/led.c 00:09:40.250 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:40.250 Processing file module/accel/dsa/accel_dsa.c 00:09:40.507 Processing file module/accel/error/accel_error.c 00:09:40.507 Processing file module/accel/error/accel_error_rpc.c 00:09:40.507 Processing file module/accel/iaa/accel_iaa.c 00:09:40.507 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:40.507 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:40.507 Processing file module/accel/ioat/accel_ioat.c 00:09:40.507 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:40.507 Processing file module/bdev/aio/bdev_aio.c 00:09:40.765 Processing file module/bdev/delay/vbdev_delay.c 00:09:40.765 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:40.765 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:40.765 Processing file module/bdev/error/vbdev_error.c 00:09:40.765 Processing file module/bdev/ftl/bdev_ftl.c 00:09:40.765 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:41.022 Processing file module/bdev/gpt/gpt.c 00:09:41.022 Processing file module/bdev/gpt/gpt.h 00:09:41.022 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:41.022 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:41.022 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:41.022 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:41.022 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:41.309 Processing file module/bdev/malloc/bdev_malloc.c 00:09:41.309 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:41.309 Processing file module/bdev/null/bdev_null_rpc.c 00:09:41.309 Processing file module/bdev/null/bdev_null.c 00:09:41.566 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:41.566 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:41.566 Processing file module/bdev/nvme/bdev_nvme.c 00:09:41.566 Processing file module/bdev/nvme/vbdev_opal.c 00:09:41.566 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:41.566 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:41.566 Processing file module/bdev/nvme/nvme_rpc.c 00:09:41.566 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:41.566 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:41.824 Processing file module/bdev/raid/raid5f.c 00:09:41.824 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:41.824 Processing file module/bdev/raid/concat.c 00:09:41.824 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:41.824 Processing file module/bdev/raid/bdev_raid.h 00:09:41.824 Processing file module/bdev/raid/bdev_raid.c 00:09:41.824 Processing file module/bdev/raid/raid0.c 00:09:41.824 Processing file module/bdev/raid/raid1.c 00:09:41.824 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:41.824 Processing file module/bdev/split/vbdev_split.c 00:09:41.824 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:41.824 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:41.824 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:42.082 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:42.082 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:42.082 Processing file module/blob/bdev/blob_bdev.c 00:09:42.082 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:42.082 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:42.082 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:42.339 Processing file module/event/subsystems/accel/accel.c 00:09:42.339 Processing file module/event/subsystems/bdev/bdev.c 00:09:42.339 
Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:42.339 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:42.339 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:42.339 Processing file module/event/subsystems/keyring/keyring.c 00:09:42.597 Processing file module/event/subsystems/nbd/nbd.c 00:09:42.597 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:42.597 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:42.597 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:42.597 Processing file module/event/subsystems/scsi/scsi.c 00:09:42.855 Processing file module/event/subsystems/sock/sock.c 00:09:42.855 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:42.855 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:42.855 Processing file module/event/subsystems/vmd/vmd.c 00:09:42.855 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:43.113 Processing file module/keyring/file/keyring_rpc.c 00:09:43.113 Processing file module/keyring/file/keyring.c 00:09:43.113 Processing file module/keyring/linux/keyring.c 00:09:43.113 Processing file module/keyring/linux/keyring_rpc.c 00:09:43.113 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:43.113 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:43.371 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:43.371 Processing file module/sock/posix/posix.c 00:09:43.371 Writing directory view page. 00:09:43.371 Overall coverage rate: 00:09:43.371 lines......: 38.7% (41103 of 106207 lines) 00:09:43.371 functions..: 42.3% (3741 of 8834 functions) 00:09:43.371 00:09:43.371 00:09:43.371 ===================== 00:09:43.371 All unit tests passed 00:09:43.371 ===================== 00:09:43.371 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:43.371 13:52:32 unittest -- unit/unittest.sh@305 -- # set +x 00:09:43.371 00:09:43.371 00:09:43.371 ************************************ 00:09:43.371 END TEST unittest 00:09:43.371 ************************************ 00:09:43.371 00:09:43.371 real 3m59.567s 00:09:43.371 user 3m30.981s 00:09:43.371 sys 0m19.675s 00:09:43.371 13:52:32 unittest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.371 13:52:32 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:43.371 13:52:32 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:43.371 13:52:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:43.371 13:52:32 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:43.371 13:52:32 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:43.371 13:52:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.371 13:52:32 -- common/autotest_common.sh@10 -- # set +x 00:09:43.371 13:52:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:43.371 13:52:32 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:43.371 13:52:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.371 13:52:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.371 13:52:32 -- common/autotest_common.sh@10 -- # set +x 00:09:43.371 ************************************ 00:09:43.371 START TEST env 00:09:43.371 ************************************ 00:09:43.371 13:52:32 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:43.629 * Looking for test storage... 
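Stepping back to the coverage stage that just completed: stripped of the repeated --rc and --no-external flags, the lcov/genhtml sequence above reduces to a short pipeline. A condensed sketch, assuming the same repository layout and output directory as in this log:

    cd /home/vagrant/spdk_repo/spdk
    out=../output/ut_coverage
    # Merge the pre-test baseline with the capture taken after the unit tests ran.
    lcov -q -a $out/ut_cov_base.info -a $out/ut_cov_test.info -o $out/ut_cov_unit.info
    # Drop directories that should not count toward unit-test coverage.
    for p in 'app/*' 'dpdk/*' 'examples/*' 'lib/vhost/rte_vhost/*' 'test/*'; do
        lcov -q -r $out/ut_cov_unit.info "$PWD/$p" -o $out/ut_cov_unit.info
    done
    # Render the HTML report announced above ("coverage report is here: ...").
    genhtml $out/ut_cov_unit.info --output-directory $out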
00:09:43.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:43.629 13:52:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:43.629 13:52:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.629 13:52:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.629 13:52:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:43.629 ************************************ 00:09:43.629 START TEST env_memory 00:09:43.629 ************************************ 00:09:43.629 13:52:32 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:43.629 00:09:43.629 00:09:43.629 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.629 http://cunit.sourceforge.net/ 00:09:43.629 00:09:43.629 00:09:43.629 Suite: memory 00:09:43.629 Test: alloc and free memory map ...[2024-07-25 13:52:32.480351] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:43.629 passed 00:09:43.629 Test: mem map translation ...[2024-07-25 13:52:32.520990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:43.629 [2024-07-25 13:52:32.521270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:43.629 [2024-07-25 13:52:32.521481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:43.629 [2024-07-25 13:52:32.521658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:43.629 passed 00:09:43.629 Test: mem map registration ...[2024-07-25 13:52:32.591780] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:43.629 [2024-07-25 13:52:32.592072] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:43.629 passed 00:09:43.888 Test: mem map adjacent registrations ...passed 00:09:43.888 00:09:43.888 Run Summary: Type Total Ran Passed Failed Inactive 00:09:43.888 suites 1 1 n/a 0 0 00:09:43.888 tests 4 4 4 0 0 00:09:43.888 asserts 152 152 152 0 n/a 00:09:43.888 00:09:43.888 Elapsed time = 0.241 seconds 00:09:43.888 00:09:43.888 real 0m0.272s 00:09:43.888 user 0m0.234s 00:09:43.888 sys 0m0.036s 00:09:43.888 13:52:32 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.888 13:52:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:43.888 ************************************ 00:09:43.888 END TEST env_memory 00:09:43.888 ************************************ 00:09:43.888 13:52:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:43.888 13:52:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.888 13:52:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.888 13:52:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:43.888 ************************************ 00:09:43.888 START TEST env_vtophys 00:09:43.888 ************************************ 00:09:43.888 13:52:32 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:43.888 EAL: lib.eal log level changed from notice to debug 00:09:43.888 EAL: Detected lcore 0 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 1 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 2 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 3 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 4 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 5 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 6 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 7 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 8 as core 0 on socket 0 00:09:43.888 EAL: Detected lcore 9 as core 0 on socket 0 00:09:43.888 EAL: Maximum logical cores by configuration: 128 00:09:43.888 EAL: Detected CPU lcores: 10 00:09:43.888 EAL: Detected NUMA nodes: 1 00:09:43.888 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:43.888 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:43.888 EAL: Checking presence of .so 'librte_eal.so' 00:09:43.888 EAL: Detected static linkage of DPDK 00:09:43.888 EAL: No shared files mode enabled, IPC will be disabled 00:09:43.888 EAL: Selected IOVA mode 'PA' 00:09:43.888 EAL: Probing VFIO support... 00:09:43.888 EAL: IOMMU type 1 (Type 1) is supported 00:09:43.888 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:43.888 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:43.888 EAL: VFIO support initialized 00:09:43.888 EAL: Ask a virtual area of 0x2e000 bytes 00:09:43.888 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:43.888 EAL: Setting up physically contiguous memory... 00:09:43.888 EAL: Setting maximum number of open files to 1048576 00:09:43.888 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:43.888 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:43.888 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.888 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:43.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.888 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.888 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:43.888 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:43.888 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.888 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:43.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.888 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.888 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:43.888 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:43.888 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.888 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:43.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.888 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.888 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:43.888 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:43.888 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.888 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:43.888 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.888 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.888 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:43.888 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
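The 2 MB hugepage pool that EAL is carving into memseg lists here is normally reserved up front, before the env tests start. A sketch of the usual preparation, assuming root access and the stock SPDK helper script (exact HUGEMEM handling may differ between SPDK versions):

    # Plain kernel interface: reserve 2048 x 2 MB pages (4 GB total).
    echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
    # Or let SPDK's setup script do it; HUGEMEM is the total amount in MB.
    sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh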
00:09:43.888 EAL: Hugepages will be freed exactly as allocated. 00:09:43.888 EAL: No shared files mode enabled, IPC is disabled 00:09:43.888 EAL: No shared files mode enabled, IPC is disabled 00:09:44.200 EAL: TSC frequency is ~2200000 KHz 00:09:44.200 EAL: Main lcore 0 is ready (tid=7ff54789aa80;cpuset=[0]) 00:09:44.200 EAL: Trying to obtain current memory policy. 00:09:44.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.200 EAL: Restoring previous memory policy: 0 00:09:44.200 EAL: request: mp_malloc_sync 00:09:44.200 EAL: No shared files mode enabled, IPC is disabled 00:09:44.200 EAL: Heap on socket 0 was expanded by 2MB 00:09:44.200 EAL: No shared files mode enabled, IPC is disabled 00:09:44.200 EAL: Mem event callback 'spdk:(nil)' registered 00:09:44.200 00:09:44.200 00:09:44.200 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.200 http://cunit.sourceforge.net/ 00:09:44.200 00:09:44.200 00:09:44.200 Suite: components_suite 00:09:44.458 Test: vtophys_malloc_test ...passed 00:09:44.458 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:44.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.458 EAL: Restoring previous memory policy: 0 00:09:44.458 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.458 EAL: request: mp_malloc_sync 00:09:44.458 EAL: No shared files mode enabled, IPC is disabled 00:09:44.458 EAL: Heap on socket 0 was expanded by 4MB 00:09:44.458 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.458 EAL: request: mp_malloc_sync 00:09:44.458 EAL: No shared files mode enabled, IPC is disabled 00:09:44.458 EAL: Heap on socket 0 was shrunk by 4MB 00:09:44.458 EAL: Trying to obtain current memory policy. 00:09:44.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.458 EAL: Restoring previous memory policy: 0 00:09:44.458 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.458 EAL: request: mp_malloc_sync 00:09:44.458 EAL: No shared files mode enabled, IPC is disabled 00:09:44.458 EAL: Heap on socket 0 was expanded by 6MB 00:09:44.458 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.458 EAL: request: mp_malloc_sync 00:09:44.459 EAL: No shared files mode enabled, IPC is disabled 00:09:44.459 EAL: Heap on socket 0 was shrunk by 6MB 00:09:44.459 EAL: Trying to obtain current memory policy. 00:09:44.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.459 EAL: Restoring previous memory policy: 0 00:09:44.459 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.459 EAL: request: mp_malloc_sync 00:09:44.459 EAL: No shared files mode enabled, IPC is disabled 00:09:44.459 EAL: Heap on socket 0 was expanded by 10MB 00:09:44.459 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.459 EAL: request: mp_malloc_sync 00:09:44.459 EAL: No shared files mode enabled, IPC is disabled 00:09:44.459 EAL: Heap on socket 0 was shrunk by 10MB 00:09:44.716 EAL: Trying to obtain current memory policy. 00:09:44.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.716 EAL: Restoring previous memory policy: 0 00:09:44.716 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.716 EAL: request: mp_malloc_sync 00:09:44.716 EAL: No shared files mode enabled, IPC is disabled 00:09:44.716 EAL: Heap on socket 0 was expanded by 18MB 00:09:44.716 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.716 EAL: request: mp_malloc_sync 00:09:44.716 EAL: No shared files mode enabled, IPC is disabled 00:09:44.716 EAL: Heap on socket 0 was shrunk by 18MB 00:09:44.716 EAL: Trying to obtain current memory policy. 
00:09:44.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.716 EAL: Restoring previous memory policy: 0 00:09:44.716 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.716 EAL: request: mp_malloc_sync 00:09:44.716 EAL: No shared files mode enabled, IPC is disabled 00:09:44.716 EAL: Heap on socket 0 was expanded by 34MB 00:09:44.716 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.716 EAL: request: mp_malloc_sync 00:09:44.716 EAL: No shared files mode enabled, IPC is disabled 00:09:44.716 EAL: Heap on socket 0 was shrunk by 34MB 00:09:44.716 EAL: Trying to obtain current memory policy. 00:09:44.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.716 EAL: Restoring previous memory policy: 0 00:09:44.716 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.716 EAL: request: mp_malloc_sync 00:09:44.716 EAL: No shared files mode enabled, IPC is disabled 00:09:44.716 EAL: Heap on socket 0 was expanded by 66MB 00:09:44.973 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.973 EAL: request: mp_malloc_sync 00:09:44.973 EAL: No shared files mode enabled, IPC is disabled 00:09:44.973 EAL: Heap on socket 0 was shrunk by 66MB 00:09:44.973 EAL: Trying to obtain current memory policy. 00:09:44.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.973 EAL: Restoring previous memory policy: 0 00:09:44.973 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.973 EAL: request: mp_malloc_sync 00:09:44.973 EAL: No shared files mode enabled, IPC is disabled 00:09:44.974 EAL: Heap on socket 0 was expanded by 130MB 00:09:45.231 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.231 EAL: request: mp_malloc_sync 00:09:45.231 EAL: No shared files mode enabled, IPC is disabled 00:09:45.231 EAL: Heap on socket 0 was shrunk by 130MB 00:09:45.488 EAL: Trying to obtain current memory policy. 00:09:45.488 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.488 EAL: Restoring previous memory policy: 0 00:09:45.488 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.488 EAL: request: mp_malloc_sync 00:09:45.488 EAL: No shared files mode enabled, IPC is disabled 00:09:45.488 EAL: Heap on socket 0 was expanded by 258MB 00:09:46.054 EAL: Calling mem event callback 'spdk:(nil)' 00:09:46.054 EAL: request: mp_malloc_sync 00:09:46.054 EAL: No shared files mode enabled, IPC is disabled 00:09:46.054 EAL: Heap on socket 0 was shrunk by 258MB 00:09:46.313 EAL: Trying to obtain current memory policy. 00:09:46.313 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:46.571 EAL: Restoring previous memory policy: 0 00:09:46.571 EAL: Calling mem event callback 'spdk:(nil)' 00:09:46.571 EAL: request: mp_malloc_sync 00:09:46.571 EAL: No shared files mode enabled, IPC is disabled 00:09:46.571 EAL: Heap on socket 0 was expanded by 514MB 00:09:47.506 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.506 EAL: request: mp_malloc_sync 00:09:47.506 EAL: No shared files mode enabled, IPC is disabled 00:09:47.506 EAL: Heap on socket 0 was shrunk by 514MB 00:09:48.071 EAL: Trying to obtain current memory policy. 
00:09:48.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.330 EAL: Restoring previous memory policy: 0 00:09:48.330 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.330 EAL: request: mp_malloc_sync 00:09:48.330 EAL: No shared files mode enabled, IPC is disabled 00:09:48.330 EAL: Heap on socket 0 was expanded by 1026MB 00:09:50.228 EAL: Calling mem event callback 'spdk:(nil)' 00:09:50.228 EAL: request: mp_malloc_sync 00:09:50.228 EAL: No shared files mode enabled, IPC is disabled 00:09:50.228 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:51.603 passed 00:09:51.603 00:09:51.603 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.603 suites 1 1 n/a 0 0 00:09:51.603 tests 2 2 2 0 0 00:09:51.603 asserts 6335 6335 6335 0 n/a 00:09:51.603 00:09:51.603 Elapsed time = 7.539 seconds 00:09:51.603 EAL: Calling mem event callback 'spdk:(nil)' 00:09:51.603 EAL: request: mp_malloc_sync 00:09:51.603 EAL: No shared files mode enabled, IPC is disabled 00:09:51.603 EAL: Heap on socket 0 was shrunk by 2MB 00:09:51.603 EAL: No shared files mode enabled, IPC is disabled 00:09:51.603 EAL: No shared files mode enabled, IPC is disabled 00:09:51.603 EAL: No shared files mode enabled, IPC is disabled 00:09:51.603 ************************************ 00:09:51.603 END TEST env_vtophys 00:09:51.603 ************************************ 00:09:51.603 00:09:51.603 real 0m7.867s 00:09:51.603 user 0m6.668s 00:09:51.603 sys 0m1.033s 00:09:51.603 13:52:40 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.603 13:52:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:51.862 13:52:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:51.862 13:52:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:51.862 13:52:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.862 13:52:40 env -- common/autotest_common.sh@10 -- # set +x 00:09:51.862 ************************************ 00:09:51.862 START TEST env_pci 00:09:51.862 ************************************ 00:09:51.862 13:52:40 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:51.862 00:09:51.862 00:09:51.862 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.862 http://cunit.sourceforge.net/ 00:09:51.862 00:09:51.862 00:09:51.862 Suite: pci 00:09:51.862 Test: pci_hook ...[2024-07-25 13:52:40.719247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 112032 has claimed it 00:09:51.862 EAL: Cannot find device (10000:00:01.0) 00:09:51.862 EAL: Failed to attach device on primary process 00:09:51.862 passed 00:09:51.862 00:09:51.862 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.862 suites 1 1 n/a 0 0 00:09:51.862 tests 1 1 1 0 0 00:09:51.862 asserts 25 25 25 0 n/a 00:09:51.862 00:09:51.862 Elapsed time = 0.005 seconds 00:09:51.862 00:09:51.862 real 0m0.092s 00:09:51.862 user 0m0.044s 00:09:51.862 sys 0m0.047s 00:09:51.862 13:52:40 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.862 13:52:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:51.862 ************************************ 00:09:51.862 END TEST env_pci 00:09:51.862 ************************************ 00:09:51.862 13:52:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:51.862 13:52:40 env -- env/env.sh@15 -- # uname 00:09:51.862 13:52:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:51.862 13:52:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:51.862 13:52:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:51.862 13:52:40 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.862 13:52:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.862 13:52:40 env -- common/autotest_common.sh@10 -- # set +x 00:09:51.862 ************************************ 00:09:51.862 START TEST env_dpdk_post_init 00:09:51.862 ************************************ 00:09:51.862 13:52:40 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:51.862 EAL: Detected CPU lcores: 10 00:09:51.862 EAL: Detected NUMA nodes: 1 00:09:51.862 EAL: Detected static linkage of DPDK 00:09:52.120 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:52.120 EAL: Selected IOVA mode 'PA' 00:09:52.120 EAL: VFIO support initialized 00:09:52.120 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:52.120 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:52.120 Starting DPDK initialization... 00:09:52.120 Starting SPDK post initialization... 00:09:52.120 SPDK NVMe probe 00:09:52.120 Attaching to 0000:00:10.0 00:09:52.120 Attached to 0000:00:10.0 00:09:52.120 Cleaning up... 00:09:52.120 ************************************ 00:09:52.120 END TEST env_dpdk_post_init 00:09:52.120 ************************************ 00:09:52.120 00:09:52.120 real 0m0.303s 00:09:52.120 user 0m0.089s 00:09:52.120 sys 0m0.116s 00:09:52.120 13:52:41 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.120 13:52:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:52.379 13:52:41 env -- env/env.sh@26 -- # uname 00:09:52.379 13:52:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:52.379 13:52:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:52.379 13:52:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:52.379 13:52:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.379 13:52:41 env -- common/autotest_common.sh@10 -- # set +x 00:09:52.379 ************************************ 00:09:52.379 START TEST env_mem_callbacks 00:09:52.379 ************************************ 00:09:52.379 13:52:41 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:52.379 EAL: Detected CPU lcores: 10 00:09:52.379 EAL: Detected NUMA nodes: 1 00:09:52.379 EAL: Detected static linkage of DPDK 00:09:52.379 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:52.379 EAL: Selected IOVA mode 'PA' 00:09:52.379 EAL: VFIO support initialized 00:09:52.379 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:52.379 00:09:52.379 00:09:52.379 CUnit - A unit testing framework for C - Version 2.1-3 00:09:52.379 http://cunit.sourceforge.net/ 00:09:52.379 00:09:52.379 00:09:52.379 Suite: memory 00:09:52.379 Test: test ... 
00:09:52.379 register 0x200000200000 2097152 00:09:52.379 malloc 3145728 00:09:52.379 register 0x200000400000 4194304 00:09:52.379 buf 0x2000004fffc0 len 3145728 PASSED 00:09:52.379 malloc 64 00:09:52.379 buf 0x2000004ffec0 len 64 PASSED 00:09:52.379 malloc 4194304 00:09:52.379 register 0x200000800000 6291456 00:09:52.379 buf 0x2000009fffc0 len 4194304 PASSED 00:09:52.379 free 0x2000004fffc0 3145728 00:09:52.379 free 0x2000004ffec0 64 00:09:52.379 unregister 0x200000400000 4194304 PASSED 00:09:52.379 free 0x2000009fffc0 4194304 00:09:52.379 unregister 0x200000800000 6291456 PASSED 00:09:52.379 malloc 8388608 00:09:52.379 register 0x200000400000 10485760 00:09:52.637 buf 0x2000005fffc0 len 8388608 PASSED 00:09:52.637 free 0x2000005fffc0 8388608 00:09:52.637 unregister 0x200000400000 10485760 PASSED 00:09:52.637 passed 00:09:52.637 00:09:52.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:52.637 suites 1 1 n/a 0 0 00:09:52.637 tests 1 1 1 0 0 00:09:52.637 asserts 15 15 15 0 n/a 00:09:52.637 00:09:52.637 Elapsed time = 0.068 seconds 00:09:52.637 ************************************ 00:09:52.637 END TEST env_mem_callbacks 00:09:52.637 ************************************ 00:09:52.637 00:09:52.637 real 0m0.305s 00:09:52.637 user 0m0.129s 00:09:52.637 sys 0m0.077s 00:09:52.637 13:52:41 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.637 13:52:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:52.637 00:09:52.637 real 0m9.176s 00:09:52.637 user 0m7.349s 00:09:52.637 sys 0m1.446s 00:09:52.637 13:52:41 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.637 13:52:41 env -- common/autotest_common.sh@10 -- # set +x 00:09:52.637 ************************************ 00:09:52.637 END TEST env 00:09:52.637 ************************************ 00:09:52.637 13:52:41 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:52.637 13:52:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:52.637 13:52:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.637 13:52:41 -- common/autotest_common.sh@10 -- # set +x 00:09:52.637 ************************************ 00:09:52.637 START TEST rpc 00:09:52.637 ************************************ 00:09:52.637 13:52:41 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:52.637 * Looking for test storage... 00:09:52.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:52.637 13:52:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=112164 00:09:52.637 13:52:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:52.637 13:52:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 112164 00:09:52.637 13:52:41 rpc -- common/autotest_common.sh@831 -- # '[' -z 112164 ']' 00:09:52.637 13:52:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:52.637 13:52:41 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.637 13:52:41 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.637 13:52:41 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
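The waitforlisten step being echoed here just polls the target's RPC socket until it answers. A rough hand-rolled equivalent, assuming the same binary and the default socket path (scripts/rpc.py and its spdk_get_version method are standard SPDK tooling, but the polling loop below is only illustrative):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -e bdev &            # -e bdev: enable the bdev tracepoint group, as in this run
    tgt_pid=$!
    # Poll the default RPC socket until the target responds, giving up after ~10 s.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done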
00:09:52.637 13:52:41 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.637 13:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.895 [2024-07-25 13:52:41.754164] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:09:52.895 [2024-07-25 13:52:41.754992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112164 ] 00:09:52.895 [2024-07-25 13:52:41.930810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.152 [2024-07-25 13:52:42.178359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:53.152 [2024-07-25 13:52:42.178466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 112164' to capture a snapshot of events at runtime. 00:09:53.152 [2024-07-25 13:52:42.178543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.152 [2024-07-25 13:52:42.178576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.152 [2024-07-25 13:52:42.178600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid112164 for offline analysis/debug. 00:09:53.152 [2024-07-25 13:52:42.178698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.086 13:52:42 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.086 13:52:42 rpc -- common/autotest_common.sh@864 -- # return 0 00:09:54.086 13:52:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.087 13:52:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.087 13:52:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:54.087 13:52:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:54.087 13:52:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.087 13:52:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.087 13:52:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.087 ************************************ 00:09:54.087 START TEST rpc_integrity 00:09:54.087 ************************************ 00:09:54.087 13:52:42 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:54.087 13:52:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:54.087 13:52:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.087 13:52:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.087 13:52:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.087 13:52:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:54.087 13:52:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.087 
13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:54.087 { 00:09:54.087 "name": "Malloc0", 00:09:54.087 "aliases": [ 00:09:54.087 "bc38289d-8cdb-46d4-b7b1-f64adce5f5a7" 00:09:54.087 ], 00:09:54.087 "product_name": "Malloc disk", 00:09:54.087 "block_size": 512, 00:09:54.087 "num_blocks": 16384, 00:09:54.087 "uuid": "bc38289d-8cdb-46d4-b7b1-f64adce5f5a7", 00:09:54.087 "assigned_rate_limits": { 00:09:54.087 "rw_ios_per_sec": 0, 00:09:54.087 "rw_mbytes_per_sec": 0, 00:09:54.087 "r_mbytes_per_sec": 0, 00:09:54.087 "w_mbytes_per_sec": 0 00:09:54.087 }, 00:09:54.087 "claimed": false, 00:09:54.087 "zoned": false, 00:09:54.087 "supported_io_types": { 00:09:54.087 "read": true, 00:09:54.087 "write": true, 00:09:54.087 "unmap": true, 00:09:54.087 "flush": true, 00:09:54.087 "reset": true, 00:09:54.087 "nvme_admin": false, 00:09:54.087 "nvme_io": false, 00:09:54.087 "nvme_io_md": false, 00:09:54.087 "write_zeroes": true, 00:09:54.087 "zcopy": true, 00:09:54.087 "get_zone_info": false, 00:09:54.087 "zone_management": false, 00:09:54.087 "zone_append": false, 00:09:54.087 "compare": false, 00:09:54.087 "compare_and_write": false, 00:09:54.087 "abort": true, 00:09:54.087 "seek_hole": false, 00:09:54.087 "seek_data": false, 00:09:54.087 "copy": true, 00:09:54.087 "nvme_iov_md": false 00:09:54.087 }, 00:09:54.087 "memory_domains": [ 00:09:54.087 { 00:09:54.087 "dma_device_id": "system", 00:09:54.087 "dma_device_type": 1 00:09:54.087 }, 00:09:54.087 { 00:09:54.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.087 "dma_device_type": 2 00:09:54.087 } 00:09:54.087 ], 00:09:54.087 "driver_specific": {} 00:09:54.087 } 00:09:54.087 ]' 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.087 [2024-07-25 13:52:43.114370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:54.087 [2024-07-25 13:52:43.114485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.087 [2024-07-25 13:52:43.114558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.087 [2024-07-25 13:52:43.114596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.087 [2024-07-25 13:52:43.117303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.087 [2024-07-25 13:52:43.117388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:54.087 Passthru0 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
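rpc_cmd in this test effectively forwards to scripts/rpc.py, so the integrity sequence can be reproduced against a running target with the same RPC names that appear above and below. A sketch (sizes and names exactly as in the test; the jq call mirrors the 'jq length' checks in rpc.sh):

    rpc=./scripts/rpc.py                        # talks to the spdk_tgt started earlier
    malloc=$($rpc bdev_malloc_create 8 512)     # 8 MB malloc disk, 512-byte blocks; prints e.g. Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    $rpc bdev_get_bdevs | jq length             # expect 2: the malloc disk plus its passthru
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"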
00:09:54.087 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.087 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:54.346 { 00:09:54.346 "name": "Malloc0", 00:09:54.346 "aliases": [ 00:09:54.346 "bc38289d-8cdb-46d4-b7b1-f64adce5f5a7" 00:09:54.346 ], 00:09:54.346 "product_name": "Malloc disk", 00:09:54.346 "block_size": 512, 00:09:54.346 "num_blocks": 16384, 00:09:54.346 "uuid": "bc38289d-8cdb-46d4-b7b1-f64adce5f5a7", 00:09:54.346 "assigned_rate_limits": { 00:09:54.346 "rw_ios_per_sec": 0, 00:09:54.346 "rw_mbytes_per_sec": 0, 00:09:54.346 "r_mbytes_per_sec": 0, 00:09:54.346 "w_mbytes_per_sec": 0 00:09:54.346 }, 00:09:54.346 "claimed": true, 00:09:54.346 "claim_type": "exclusive_write", 00:09:54.346 "zoned": false, 00:09:54.346 "supported_io_types": { 00:09:54.346 "read": true, 00:09:54.346 "write": true, 00:09:54.346 "unmap": true, 00:09:54.346 "flush": true, 00:09:54.346 "reset": true, 00:09:54.346 "nvme_admin": false, 00:09:54.346 "nvme_io": false, 00:09:54.346 "nvme_io_md": false, 00:09:54.346 "write_zeroes": true, 00:09:54.346 "zcopy": true, 00:09:54.346 "get_zone_info": false, 00:09:54.346 "zone_management": false, 00:09:54.346 "zone_append": false, 00:09:54.346 "compare": false, 00:09:54.346 "compare_and_write": false, 00:09:54.346 "abort": true, 00:09:54.346 "seek_hole": false, 00:09:54.346 "seek_data": false, 00:09:54.346 "copy": true, 00:09:54.346 "nvme_iov_md": false 00:09:54.346 }, 00:09:54.346 "memory_domains": [ 00:09:54.346 { 00:09:54.346 "dma_device_id": "system", 00:09:54.346 "dma_device_type": 1 00:09:54.346 }, 00:09:54.346 { 00:09:54.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.346 "dma_device_type": 2 00:09:54.346 } 00:09:54.346 ], 00:09:54.346 "driver_specific": {} 00:09:54.346 }, 00:09:54.346 { 00:09:54.346 "name": "Passthru0", 00:09:54.346 "aliases": [ 00:09:54.346 "b1b7545f-150c-5b35-a72a-54454b7772a9" 00:09:54.346 ], 00:09:54.346 "product_name": "passthru", 00:09:54.346 "block_size": 512, 00:09:54.346 "num_blocks": 16384, 00:09:54.346 "uuid": "b1b7545f-150c-5b35-a72a-54454b7772a9", 00:09:54.346 "assigned_rate_limits": { 00:09:54.346 "rw_ios_per_sec": 0, 00:09:54.346 "rw_mbytes_per_sec": 0, 00:09:54.346 "r_mbytes_per_sec": 0, 00:09:54.346 "w_mbytes_per_sec": 0 00:09:54.346 }, 00:09:54.346 "claimed": false, 00:09:54.346 "zoned": false, 00:09:54.346 "supported_io_types": { 00:09:54.346 "read": true, 00:09:54.346 "write": true, 00:09:54.346 "unmap": true, 00:09:54.346 "flush": true, 00:09:54.346 "reset": true, 00:09:54.346 "nvme_admin": false, 00:09:54.346 "nvme_io": false, 00:09:54.346 "nvme_io_md": false, 00:09:54.346 "write_zeroes": true, 00:09:54.346 "zcopy": true, 00:09:54.346 "get_zone_info": false, 00:09:54.346 "zone_management": false, 00:09:54.346 "zone_append": false, 00:09:54.346 "compare": false, 00:09:54.346 "compare_and_write": false, 00:09:54.346 "abort": true, 00:09:54.346 "seek_hole": false, 00:09:54.346 "seek_data": false, 00:09:54.346 "copy": true, 00:09:54.346 "nvme_iov_md": false 00:09:54.346 }, 00:09:54.346 "memory_domains": [ 00:09:54.346 { 00:09:54.346 "dma_device_id": "system", 00:09:54.346 "dma_device_type": 1 00:09:54.346 }, 00:09:54.346 { 00:09:54.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.346 "dma_device_type": 
2 00:09:54.346 } 00:09:54.346 ], 00:09:54.346 "driver_specific": { 00:09:54.346 "passthru": { 00:09:54.346 "name": "Passthru0", 00:09:54.346 "base_bdev_name": "Malloc0" 00:09:54.346 } 00:09:54.346 } 00:09:54.346 } 00:09:54.346 ]' 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:54.346 13:52:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:54.346 00:09:54.346 real 0m0.302s 00:09:54.346 user 0m0.183s 00:09:54.346 sys 0m0.022s 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.346 13:52:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 ************************************ 00:09:54.346 END TEST rpc_integrity 00:09:54.346 ************************************ 00:09:54.346 13:52:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:54.346 13:52:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.346 13:52:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.346 13:52:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 ************************************ 00:09:54.346 START TEST rpc_plugins 00:09:54.346 ************************************ 00:09:54.346 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:09:54.346 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:54.346 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.346 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.346 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:54.346 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:54.346 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.346 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:54.346 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.346 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:54.346 { 00:09:54.346 "name": "Malloc1", 00:09:54.346 
"aliases": [ 00:09:54.346 "60181aba-37e8-49f1-accb-6afee2677f00" 00:09:54.346 ], 00:09:54.346 "product_name": "Malloc disk", 00:09:54.346 "block_size": 4096, 00:09:54.346 "num_blocks": 256, 00:09:54.346 "uuid": "60181aba-37e8-49f1-accb-6afee2677f00", 00:09:54.346 "assigned_rate_limits": { 00:09:54.346 "rw_ios_per_sec": 0, 00:09:54.346 "rw_mbytes_per_sec": 0, 00:09:54.346 "r_mbytes_per_sec": 0, 00:09:54.346 "w_mbytes_per_sec": 0 00:09:54.346 }, 00:09:54.346 "claimed": false, 00:09:54.346 "zoned": false, 00:09:54.346 "supported_io_types": { 00:09:54.346 "read": true, 00:09:54.346 "write": true, 00:09:54.346 "unmap": true, 00:09:54.346 "flush": true, 00:09:54.346 "reset": true, 00:09:54.346 "nvme_admin": false, 00:09:54.346 "nvme_io": false, 00:09:54.346 "nvme_io_md": false, 00:09:54.346 "write_zeroes": true, 00:09:54.346 "zcopy": true, 00:09:54.346 "get_zone_info": false, 00:09:54.346 "zone_management": false, 00:09:54.346 "zone_append": false, 00:09:54.346 "compare": false, 00:09:54.346 "compare_and_write": false, 00:09:54.346 "abort": true, 00:09:54.346 "seek_hole": false, 00:09:54.346 "seek_data": false, 00:09:54.346 "copy": true, 00:09:54.346 "nvme_iov_md": false 00:09:54.346 }, 00:09:54.346 "memory_domains": [ 00:09:54.346 { 00:09:54.346 "dma_device_id": "system", 00:09:54.346 "dma_device_type": 1 00:09:54.346 }, 00:09:54.346 { 00:09:54.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.346 "dma_device_type": 2 00:09:54.346 } 00:09:54.346 ], 00:09:54.346 "driver_specific": {} 00:09:54.346 } 00:09:54.346 ]' 00:09:54.347 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:54.606 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:54.606 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.606 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.606 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:54.606 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:54.606 13:52:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:54.606 00:09:54.606 real 0m0.158s 00:09:54.606 user 0m0.105s 00:09:54.606 sys 0m0.013s 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.606 13:52:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:54.606 ************************************ 00:09:54.606 END TEST rpc_plugins 00:09:54.606 ************************************ 00:09:54.606 13:52:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:54.606 13:52:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.606 13:52:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.606 13:52:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.606 ************************************ 00:09:54.606 START TEST rpc_trace_cmd_test 00:09:54.606 ************************************ 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:54.606 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid112164", 00:09:54.606 "tpoint_group_mask": "0x8", 00:09:54.606 "iscsi_conn": { 00:09:54.606 "mask": "0x2", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "scsi": { 00:09:54.606 "mask": "0x4", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "bdev": { 00:09:54.606 "mask": "0x8", 00:09:54.606 "tpoint_mask": "0xffffffffffffffff" 00:09:54.606 }, 00:09:54.606 "nvmf_rdma": { 00:09:54.606 "mask": "0x10", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "nvmf_tcp": { 00:09:54.606 "mask": "0x20", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "ftl": { 00:09:54.606 "mask": "0x40", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "blobfs": { 00:09:54.606 "mask": "0x80", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "dsa": { 00:09:54.606 "mask": "0x200", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "thread": { 00:09:54.606 "mask": "0x400", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "nvme_pcie": { 00:09:54.606 "mask": "0x800", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "iaa": { 00:09:54.606 "mask": "0x1000", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "nvme_tcp": { 00:09:54.606 "mask": "0x2000", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "bdev_nvme": { 00:09:54.606 "mask": "0x4000", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 }, 00:09:54.606 "sock": { 00:09:54.606 "mask": "0x8000", 00:09:54.606 "tpoint_mask": "0x0" 00:09:54.606 } 00:09:54.606 }' 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:54.606 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:54.607 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:54.607 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:54.607 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:54.867 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:54.867 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:54.867 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:54.867 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:54.867 13:52:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:54.867 00:09:54.867 real 0m0.279s 00:09:54.867 user 0m0.248s 00:09:54.867 sys 0m0.023s 00:09:54.867 13:52:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.867 13:52:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.867 ************************************ 00:09:54.867 END TEST rpc_trace_cmd_test 00:09:54.867 ************************************ 00:09:54.867 13:52:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 
-eq 1 ]] 00:09:54.867 13:52:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:54.867 13:52:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:54.867 13:52:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.867 13:52:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.867 13:52:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.867 ************************************ 00:09:54.867 START TEST rpc_daemon_integrity 00:09:54.867 ************************************ 00:09:54.867 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:54.867 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:54.867 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.867 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:54.867 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.867 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:54.867 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.124 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:55.124 { 00:09:55.124 "name": "Malloc2", 00:09:55.124 "aliases": [ 00:09:55.124 "61bca45d-2418-4edf-a03a-064a832207f8" 00:09:55.124 ], 00:09:55.124 "product_name": "Malloc disk", 00:09:55.124 "block_size": 512, 00:09:55.124 "num_blocks": 16384, 00:09:55.125 "uuid": "61bca45d-2418-4edf-a03a-064a832207f8", 00:09:55.125 "assigned_rate_limits": { 00:09:55.125 "rw_ios_per_sec": 0, 00:09:55.125 "rw_mbytes_per_sec": 0, 00:09:55.125 "r_mbytes_per_sec": 0, 00:09:55.125 "w_mbytes_per_sec": 0 00:09:55.125 }, 00:09:55.125 "claimed": false, 00:09:55.125 "zoned": false, 00:09:55.125 "supported_io_types": { 00:09:55.125 "read": true, 00:09:55.125 "write": true, 00:09:55.125 "unmap": true, 00:09:55.125 "flush": true, 00:09:55.125 "reset": true, 00:09:55.125 "nvme_admin": false, 00:09:55.125 "nvme_io": false, 00:09:55.125 "nvme_io_md": false, 00:09:55.125 "write_zeroes": true, 00:09:55.125 "zcopy": true, 00:09:55.125 "get_zone_info": false, 00:09:55.125 "zone_management": false, 00:09:55.125 "zone_append": false, 00:09:55.125 "compare": false, 00:09:55.125 "compare_and_write": false, 00:09:55.125 "abort": true, 00:09:55.125 "seek_hole": false, 00:09:55.125 "seek_data": false, 00:09:55.125 "copy": true, 00:09:55.125 "nvme_iov_md": false 00:09:55.125 }, 00:09:55.125 "memory_domains": [ 00:09:55.125 { 00:09:55.125 "dma_device_id": "system", 
00:09:55.125 "dma_device_type": 1 00:09:55.125 }, 00:09:55.125 { 00:09:55.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.125 "dma_device_type": 2 00:09:55.125 } 00:09:55.125 ], 00:09:55.125 "driver_specific": {} 00:09:55.125 } 00:09:55.125 ]' 00:09:55.125 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:55.125 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:55.125 13:52:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:55.125 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.125 13:52:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.125 [2024-07-25 13:52:44.007007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:55.125 [2024-07-25 13:52:44.007096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.125 [2024-07-25 13:52:44.007155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.125 [2024-07-25 13:52:44.007184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.125 [2024-07-25 13:52:44.009866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.125 [2024-07-25 13:52:44.009941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:55.125 Passthru0 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:55.125 { 00:09:55.125 "name": "Malloc2", 00:09:55.125 "aliases": [ 00:09:55.125 "61bca45d-2418-4edf-a03a-064a832207f8" 00:09:55.125 ], 00:09:55.125 "product_name": "Malloc disk", 00:09:55.125 "block_size": 512, 00:09:55.125 "num_blocks": 16384, 00:09:55.125 "uuid": "61bca45d-2418-4edf-a03a-064a832207f8", 00:09:55.125 "assigned_rate_limits": { 00:09:55.125 "rw_ios_per_sec": 0, 00:09:55.125 "rw_mbytes_per_sec": 0, 00:09:55.125 "r_mbytes_per_sec": 0, 00:09:55.125 "w_mbytes_per_sec": 0 00:09:55.125 }, 00:09:55.125 "claimed": true, 00:09:55.125 "claim_type": "exclusive_write", 00:09:55.125 "zoned": false, 00:09:55.125 "supported_io_types": { 00:09:55.125 "read": true, 00:09:55.125 "write": true, 00:09:55.125 "unmap": true, 00:09:55.125 "flush": true, 00:09:55.125 "reset": true, 00:09:55.125 "nvme_admin": false, 00:09:55.125 "nvme_io": false, 00:09:55.125 "nvme_io_md": false, 00:09:55.125 "write_zeroes": true, 00:09:55.125 "zcopy": true, 00:09:55.125 "get_zone_info": false, 00:09:55.125 "zone_management": false, 00:09:55.125 "zone_append": false, 00:09:55.125 "compare": false, 00:09:55.125 "compare_and_write": false, 00:09:55.125 "abort": true, 00:09:55.125 "seek_hole": false, 00:09:55.125 "seek_data": false, 00:09:55.125 "copy": true, 00:09:55.125 "nvme_iov_md": false 00:09:55.125 }, 00:09:55.125 "memory_domains": [ 00:09:55.125 { 00:09:55.125 "dma_device_id": "system", 00:09:55.125 "dma_device_type": 1 00:09:55.125 }, 00:09:55.125 { 00:09:55.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:55.125 "dma_device_type": 2 00:09:55.125 } 00:09:55.125 ], 00:09:55.125 "driver_specific": {} 00:09:55.125 }, 00:09:55.125 { 00:09:55.125 "name": "Passthru0", 00:09:55.125 "aliases": [ 00:09:55.125 "7fd1e971-729c-5e76-ae29-485747108bca" 00:09:55.125 ], 00:09:55.125 "product_name": "passthru", 00:09:55.125 "block_size": 512, 00:09:55.125 "num_blocks": 16384, 00:09:55.125 "uuid": "7fd1e971-729c-5e76-ae29-485747108bca", 00:09:55.125 "assigned_rate_limits": { 00:09:55.125 "rw_ios_per_sec": 0, 00:09:55.125 "rw_mbytes_per_sec": 0, 00:09:55.125 "r_mbytes_per_sec": 0, 00:09:55.125 "w_mbytes_per_sec": 0 00:09:55.125 }, 00:09:55.125 "claimed": false, 00:09:55.125 "zoned": false, 00:09:55.125 "supported_io_types": { 00:09:55.125 "read": true, 00:09:55.125 "write": true, 00:09:55.125 "unmap": true, 00:09:55.125 "flush": true, 00:09:55.125 "reset": true, 00:09:55.125 "nvme_admin": false, 00:09:55.125 "nvme_io": false, 00:09:55.125 "nvme_io_md": false, 00:09:55.125 "write_zeroes": true, 00:09:55.125 "zcopy": true, 00:09:55.125 "get_zone_info": false, 00:09:55.125 "zone_management": false, 00:09:55.125 "zone_append": false, 00:09:55.125 "compare": false, 00:09:55.125 "compare_and_write": false, 00:09:55.125 "abort": true, 00:09:55.125 "seek_hole": false, 00:09:55.125 "seek_data": false, 00:09:55.125 "copy": true, 00:09:55.125 "nvme_iov_md": false 00:09:55.125 }, 00:09:55.125 "memory_domains": [ 00:09:55.125 { 00:09:55.125 "dma_device_id": "system", 00:09:55.125 "dma_device_type": 1 00:09:55.125 }, 00:09:55.125 { 00:09:55.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.125 "dma_device_type": 2 00:09:55.125 } 00:09:55.125 ], 00:09:55.125 "driver_specific": { 00:09:55.125 "passthru": { 00:09:55.125 "name": "Passthru0", 00:09:55.125 "base_bdev_name": "Malloc2" 00:09:55.125 } 00:09:55.125 } 00:09:55.125 } 00:09:55.125 ]' 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:55.125 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:55.383 13:52:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:55.383 00:09:55.383 real 0m0.336s 00:09:55.383 user 0m0.209s 00:09:55.383 sys 0m0.031s 00:09:55.383 13:52:44 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.383 13:52:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:55.383 ************************************ 00:09:55.383 END TEST rpc_daemon_integrity 00:09:55.383 ************************************ 00:09:55.383 13:52:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:55.383 13:52:44 rpc -- rpc/rpc.sh@84 -- # killprocess 112164 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@950 -- # '[' -z 112164 ']' 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@954 -- # kill -0 112164 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@955 -- # uname 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112164 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112164' 00:09:55.383 killing process with pid 112164 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@969 -- # kill 112164 00:09:55.383 13:52:44 rpc -- common/autotest_common.sh@974 -- # wait 112164 00:09:57.908 00:09:57.908 real 0m4.902s 00:09:57.908 user 0m5.610s 00:09:57.908 sys 0m0.777s 00:09:57.908 13:52:46 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.908 13:52:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.908 ************************************ 00:09:57.908 END TEST rpc 00:09:57.908 ************************************ 00:09:57.908 13:52:46 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:57.908 13:52:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:57.908 13:52:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.908 13:52:46 -- common/autotest_common.sh@10 -- # set +x 00:09:57.908 ************************************ 00:09:57.908 START TEST skip_rpc 00:09:57.908 ************************************ 00:09:57.908 13:52:46 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:57.908 * Looking for test storage... 
00:09:57.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:57.908 13:52:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:57.908 13:52:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:57.908 13:52:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:57.908 13:52:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:57.908 13:52:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.908 13:52:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.908 ************************************ 00:09:57.908 START TEST skip_rpc 00:09:57.908 ************************************ 00:09:57.908 13:52:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:09:57.908 13:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=112409 00:09:57.908 13:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:57.908 13:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:57.908 13:52:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:57.908 [2024-07-25 13:52:46.688498] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:09:57.908 [2024-07-25 13:52:46.688778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112409 ] 00:09:57.908 [2024-07-25 13:52:46.851578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.166 [2024-07-25 13:52:47.079304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 112409 
00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 112409 ']' 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 112409 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112409 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112409' 00:10:03.432 killing process with pid 112409 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 112409 00:10:03.432 13:52:51 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 112409 00:10:04.808 00:10:04.808 real 0m7.214s 00:10:04.808 user 0m6.719s 00:10:04.808 sys 0m0.410s 00:10:04.808 13:52:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.808 ************************************ 00:10:04.808 END TEST skip_rpc 00:10:04.808 ************************************ 00:10:04.808 13:52:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.066 13:52:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:05.066 13:52:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.066 13:52:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.066 13:52:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.066 ************************************ 00:10:05.066 START TEST skip_rpc_with_json 00:10:05.066 ************************************ 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=112528 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 112528 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 112528 ']' 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.066 13:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:05.066 [2024-07-25 13:52:53.953319] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:10:05.066 [2024-07-25 13:52:53.953715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112528 ] 00:10:05.324 [2024-07-25 13:52:54.115698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.324 [2024-07-25 13:52:54.331687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:06.260 [2024-07-25 13:52:55.120826] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:06.260 request: 00:10:06.260 { 00:10:06.260 "trtype": "tcp", 00:10:06.260 "method": "nvmf_get_transports", 00:10:06.260 "req_id": 1 00:10:06.260 } 00:10:06.260 Got JSON-RPC error response 00:10:06.260 response: 00:10:06.260 { 00:10:06.260 "code": -19, 00:10:06.260 "message": "No such device" 00:10:06.260 } 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:06.260 [2024-07-25 13:52:55.128942] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.260 13:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:06.260 { 00:10:06.260 "subsystems": [ 00:10:06.260 { 00:10:06.260 "subsystem": "scheduler", 00:10:06.260 "config": [ 00:10:06.260 { 00:10:06.260 "method": "framework_set_scheduler", 00:10:06.260 "params": { 00:10:06.260 "name": "static" 00:10:06.260 } 00:10:06.260 } 00:10:06.260 ] 00:10:06.260 }, 00:10:06.260 { 00:10:06.260 "subsystem": "vmd", 00:10:06.260 "config": [] 00:10:06.260 }, 00:10:06.260 { 00:10:06.260 "subsystem": "sock", 00:10:06.260 "config": [ 00:10:06.260 { 00:10:06.260 "method": "sock_set_default_impl", 00:10:06.260 "params": { 00:10:06.260 "impl_name": "posix" 00:10:06.260 } 00:10:06.260 }, 00:10:06.260 { 00:10:06.260 "method": "sock_impl_set_options", 00:10:06.260 "params": { 00:10:06.260 "impl_name": "ssl", 00:10:06.260 "recv_buf_size": 4096, 00:10:06.260 "send_buf_size": 4096, 00:10:06.260 "enable_recv_pipe": true, 00:10:06.260 "enable_quickack": false, 00:10:06.260 "enable_placement_id": 0, 
00:10:06.260 "enable_zerocopy_send_server": true, 00:10:06.260 "enable_zerocopy_send_client": false, 00:10:06.260 "zerocopy_threshold": 0, 00:10:06.260 "tls_version": 0, 00:10:06.260 "enable_ktls": false 00:10:06.260 } 00:10:06.260 }, 00:10:06.260 { 00:10:06.260 "method": "sock_impl_set_options", 00:10:06.260 "params": { 00:10:06.260 "impl_name": "posix", 00:10:06.260 "recv_buf_size": 2097152, 00:10:06.260 "send_buf_size": 2097152, 00:10:06.260 "enable_recv_pipe": true, 00:10:06.260 "enable_quickack": false, 00:10:06.260 "enable_placement_id": 0, 00:10:06.260 "enable_zerocopy_send_server": true, 00:10:06.260 "enable_zerocopy_send_client": false, 00:10:06.260 "zerocopy_threshold": 0, 00:10:06.260 "tls_version": 0, 00:10:06.260 "enable_ktls": false 00:10:06.260 } 00:10:06.260 } 00:10:06.260 ] 00:10:06.260 }, 00:10:06.260 { 00:10:06.260 "subsystem": "iobuf", 00:10:06.260 "config": [ 00:10:06.260 { 00:10:06.260 "method": "iobuf_set_options", 00:10:06.260 "params": { 00:10:06.260 "small_pool_count": 8192, 00:10:06.260 "large_pool_count": 1024, 00:10:06.260 "small_bufsize": 8192, 00:10:06.260 "large_bufsize": 135168 00:10:06.260 } 00:10:06.260 } 00:10:06.260 ] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "keyring", 00:10:06.261 "config": [] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "accel", 00:10:06.261 "config": [ 00:10:06.261 { 00:10:06.261 "method": "accel_set_options", 00:10:06.261 "params": { 00:10:06.261 "small_cache_size": 128, 00:10:06.261 "large_cache_size": 16, 00:10:06.261 "task_count": 2048, 00:10:06.261 "sequence_count": 2048, 00:10:06.261 "buf_count": 2048 00:10:06.261 } 00:10:06.261 } 00:10:06.261 ] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "bdev", 00:10:06.261 "config": [ 00:10:06.261 { 00:10:06.261 "method": "bdev_set_options", 00:10:06.261 "params": { 00:10:06.261 "bdev_io_pool_size": 65535, 00:10:06.261 "bdev_io_cache_size": 256, 00:10:06.261 "bdev_auto_examine": true, 00:10:06.261 "iobuf_small_cache_size": 128, 00:10:06.261 "iobuf_large_cache_size": 16 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "bdev_raid_set_options", 00:10:06.261 "params": { 00:10:06.261 "process_window_size_kb": 1024, 00:10:06.261 "process_max_bandwidth_mb_sec": 0 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "bdev_nvme_set_options", 00:10:06.261 "params": { 00:10:06.261 "action_on_timeout": "none", 00:10:06.261 "timeout_us": 0, 00:10:06.261 "timeout_admin_us": 0, 00:10:06.261 "keep_alive_timeout_ms": 10000, 00:10:06.261 "arbitration_burst": 0, 00:10:06.261 "low_priority_weight": 0, 00:10:06.261 "medium_priority_weight": 0, 00:10:06.261 "high_priority_weight": 0, 00:10:06.261 "nvme_adminq_poll_period_us": 10000, 00:10:06.261 "nvme_ioq_poll_period_us": 0, 00:10:06.261 "io_queue_requests": 0, 00:10:06.261 "delay_cmd_submit": true, 00:10:06.261 "transport_retry_count": 4, 00:10:06.261 "bdev_retry_count": 3, 00:10:06.261 "transport_ack_timeout": 0, 00:10:06.261 "ctrlr_loss_timeout_sec": 0, 00:10:06.261 "reconnect_delay_sec": 0, 00:10:06.261 "fast_io_fail_timeout_sec": 0, 00:10:06.261 "disable_auto_failback": false, 00:10:06.261 "generate_uuids": false, 00:10:06.261 "transport_tos": 0, 00:10:06.261 "nvme_error_stat": false, 00:10:06.261 "rdma_srq_size": 0, 00:10:06.261 "io_path_stat": false, 00:10:06.261 "allow_accel_sequence": false, 00:10:06.261 "rdma_max_cq_size": 0, 00:10:06.261 "rdma_cm_event_timeout_ms": 0, 00:10:06.261 "dhchap_digests": [ 00:10:06.261 "sha256", 00:10:06.261 "sha384", 00:10:06.261 "sha512" 
00:10:06.261 ], 00:10:06.261 "dhchap_dhgroups": [ 00:10:06.261 "null", 00:10:06.261 "ffdhe2048", 00:10:06.261 "ffdhe3072", 00:10:06.261 "ffdhe4096", 00:10:06.261 "ffdhe6144", 00:10:06.261 "ffdhe8192" 00:10:06.261 ] 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "bdev_nvme_set_hotplug", 00:10:06.261 "params": { 00:10:06.261 "period_us": 100000, 00:10:06.261 "enable": false 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "bdev_iscsi_set_options", 00:10:06.261 "params": { 00:10:06.261 "timeout_sec": 30 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "bdev_wait_for_examine" 00:10:06.261 } 00:10:06.261 ] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "nvmf", 00:10:06.261 "config": [ 00:10:06.261 { 00:10:06.261 "method": "nvmf_set_config", 00:10:06.261 "params": { 00:10:06.261 "discovery_filter": "match_any", 00:10:06.261 "admin_cmd_passthru": { 00:10:06.261 "identify_ctrlr": false 00:10:06.261 } 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "nvmf_set_max_subsystems", 00:10:06.261 "params": { 00:10:06.261 "max_subsystems": 1024 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "nvmf_set_crdt", 00:10:06.261 "params": { 00:10:06.261 "crdt1": 0, 00:10:06.261 "crdt2": 0, 00:10:06.261 "crdt3": 0 00:10:06.261 } 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "method": "nvmf_create_transport", 00:10:06.261 "params": { 00:10:06.261 "trtype": "TCP", 00:10:06.261 "max_queue_depth": 128, 00:10:06.261 "max_io_qpairs_per_ctrlr": 127, 00:10:06.261 "in_capsule_data_size": 4096, 00:10:06.261 "max_io_size": 131072, 00:10:06.261 "io_unit_size": 131072, 00:10:06.261 "max_aq_depth": 128, 00:10:06.261 "num_shared_buffers": 511, 00:10:06.261 "buf_cache_size": 4294967295, 00:10:06.261 "dif_insert_or_strip": false, 00:10:06.261 "zcopy": false, 00:10:06.261 "c2h_success": true, 00:10:06.261 "sock_priority": 0, 00:10:06.261 "abort_timeout_sec": 1, 00:10:06.261 "ack_timeout": 0, 00:10:06.261 "data_wr_pool_size": 0 00:10:06.261 } 00:10:06.261 } 00:10:06.261 ] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "nbd", 00:10:06.261 "config": [] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "vhost_blk", 00:10:06.261 "config": [] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "scsi", 00:10:06.261 "config": null 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "iscsi", 00:10:06.261 "config": [ 00:10:06.261 { 00:10:06.261 "method": "iscsi_set_options", 00:10:06.261 "params": { 00:10:06.261 "node_base": "iqn.2016-06.io.spdk", 00:10:06.261 "max_sessions": 128, 00:10:06.261 "max_connections_per_session": 2, 00:10:06.261 "max_queue_depth": 64, 00:10:06.261 "default_time2wait": 2, 00:10:06.261 "default_time2retain": 20, 00:10:06.261 "first_burst_length": 8192, 00:10:06.261 "immediate_data": true, 00:10:06.261 "allow_duplicated_isid": false, 00:10:06.261 "error_recovery_level": 0, 00:10:06.261 "nop_timeout": 60, 00:10:06.261 "nop_in_interval": 30, 00:10:06.261 "disable_chap": false, 00:10:06.261 "require_chap": false, 00:10:06.261 "mutual_chap": false, 00:10:06.261 "chap_group": 0, 00:10:06.261 "max_large_datain_per_connection": 64, 00:10:06.261 "max_r2t_per_connection": 4, 00:10:06.261 "pdu_pool_size": 36864, 00:10:06.261 "immediate_data_pool_size": 16384, 00:10:06.261 "data_out_pool_size": 2048 00:10:06.261 } 00:10:06.261 } 00:10:06.261 ] 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "subsystem": "vhost_scsi", 00:10:06.261 "config": [] 00:10:06.261 } 00:10:06.261 ] 00:10:06.261 } 
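For readers following the trace: the JSON blob above is the full configuration returned by save_config for the running target. A minimal sketch of the same round-trip outside the test harness, assuming the standard scripts/rpc.py client that rpc_cmd wraps (output path illustrative):

scripts/rpc.py save_config > test/rpc/config.json                        # capture the live subsystem config shown above
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json    # restart the target from that JSON

The lines that follow do exactly this replay and then grep the captured log for 'TCP Transport Init', confirming that the nvmf_create_transport entry in the JSON really took effect on the second start.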
00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 112528 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 112528 ']' 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 112528 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112528 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.261 killing process with pid 112528 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112528' 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 112528 00:10:06.261 13:52:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 112528 00:10:08.841 13:52:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=112587 00:10:08.841 13:52:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:08.841 13:52:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 112587 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 112587 ']' 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 112587 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112587 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.108 killing process with pid 112587 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112587' 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 112587 00:10:14.108 13:53:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 112587 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:16.008 00:10:16.008 real 0m10.790s 00:10:16.008 user 0m10.277s 00:10:16.008 sys 0m0.913s 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:16.008 
************************************ 00:10:16.008 END TEST skip_rpc_with_json 00:10:16.008 ************************************ 00:10:16.008 13:53:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:16.008 13:53:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:16.008 13:53:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.008 13:53:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.008 ************************************ 00:10:16.008 START TEST skip_rpc_with_delay 00:10:16.008 ************************************ 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:16.008 [2024-07-25 13:53:04.803662] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
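The *ERROR* line above is the whole point of skip_rpc_with_delay: --wait-for-rpc must be rejected when the RPC server is disabled. A hedged sketch of reproducing that check by hand, using only the flags visible in the log (the exit-status check is added for illustration):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
# expected: startup aborts with "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
echo $?   # non-zero, which is exactly what the NOT wrapper above asserts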
00:10:16.008 [2024-07-25 13:53:04.803886] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:16.008 00:10:16.008 real 0m0.147s 00:10:16.008 user 0m0.090s 00:10:16.008 sys 0m0.057s 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.008 13:53:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:16.008 ************************************ 00:10:16.008 END TEST skip_rpc_with_delay 00:10:16.008 ************************************ 00:10:16.008 13:53:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:16.008 13:53:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:16.008 13:53:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:16.008 13:53:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:16.008 13:53:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.008 13:53:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.008 ************************************ 00:10:16.008 START TEST exit_on_failed_rpc_init 00:10:16.008 ************************************ 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=112728 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 112728 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 112728 ']' 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.008 13:53:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:16.009 [2024-07-25 13:53:05.008677] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:10:16.009 [2024-07-25 13:53:05.009382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112728 ] 00:10:16.267 [2024-07-25 13:53:05.182596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.525 [2024-07-25 13:53:05.433363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:17.489 13:53:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:17.489 [2024-07-25 13:53:06.343996] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:10:17.489 [2024-07-25 13:53:06.344399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112751 ] 00:10:17.489 [2024-07-25 13:53:06.507353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.747 [2024-07-25 13:53:06.763204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.747 [2024-07-25 13:53:06.763653] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
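A short aside on the failure being provoked here, sketched with the same binary and core masks as the log (backgrounding added for illustration):

build/bin/spdk_tgt -m 0x1 &    # first instance claims the default RPC socket /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x2      # second instance fails: RPC Unix domain socket path already in use
# the second process exits non-zero, which is the condition exit_on_failed_rpc_init asserts;
# outside this negative test a second target would normally be given its own socket (e.g. -r /var/tmp/spdk2.sock)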
00:10:17.747 [2024-07-25 13:53:06.763929] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:17.747 [2024-07-25 13:53:06.764172] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 112728 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 112728 ']' 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 112728 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112728 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112728' 00:10:18.314 killing process with pid 112728 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 112728 00:10:18.314 13:53:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 112728 00:10:20.842 ************************************ 00:10:20.843 END TEST exit_on_failed_rpc_init 00:10:20.843 ************************************ 00:10:20.843 00:10:20.843 real 0m4.473s 00:10:20.843 user 0m5.196s 00:10:20.843 sys 0m0.633s 00:10:20.843 13:53:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.843 13:53:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 13:53:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:20.843 ************************************ 00:10:20.843 END TEST skip_rpc 00:10:20.843 ************************************ 00:10:20.843 00:10:20.843 real 0m22.908s 00:10:20.843 user 0m22.458s 00:10:20.843 sys 0m2.121s 00:10:20.843 13:53:09 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.843 13:53:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 13:53:09 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:20.843 13:53:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:20.843 13:53:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.843 13:53:09 -- common/autotest_common.sh@10 -- # set +x 
00:10:20.843 ************************************ 00:10:20.843 START TEST rpc_client 00:10:20.843 ************************************ 00:10:20.843 13:53:09 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:20.843 * Looking for test storage... 00:10:20.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:20.843 13:53:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:20.843 OK 00:10:20.843 13:53:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:20.843 00:10:20.843 real 0m0.149s 00:10:20.843 user 0m0.101s 00:10:20.843 sys 0m0.059s 00:10:20.843 13:53:09 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.843 ************************************ 00:10:20.843 END TEST rpc_client 00:10:20.843 ************************************ 00:10:20.843 13:53:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 13:53:09 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:20.843 13:53:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:20.843 13:53:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.843 13:53:09 -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 ************************************ 00:10:20.843 START TEST json_config 00:10:20.843 ************************************ 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:533a4aa8-7274-447f-a33e-5658d95fe7ba 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=533a4aa8-7274-447f-a33e-5658d95fe7ba 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.843 13:53:09 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.843 13:53:09 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.843 13:53:09 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.843 13:53:09 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:20.843 13:53:09 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:20.843 13:53:09 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:20.843 13:53:09 json_config -- paths/export.sh@5 -- # export PATH 00:10:20.843 13:53:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@47 -- # : 0 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.843 13:53:09 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:20.843 13:53:09 
json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:20.843 INFO: JSON configuration test init 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.843 13:53:09 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:10:20.843 13:53:09 json_config -- json_config/common.sh@9 -- # local app=target 00:10:20.843 13:53:09 json_config -- json_config/common.sh@10 -- # shift 00:10:20.843 13:53:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:20.843 13:53:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:20.843 13:53:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:20.843 13:53:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.843 13:53:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:20.843 13:53:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112915 00:10:20.843 Waiting for target to run... 00:10:20.843 13:53:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:20.843 13:53:09 json_config -- json_config/common.sh@25 -- # waitforlisten 112915 /var/tmp/spdk_tgt.sock 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@831 -- # '[' -z 112915 ']' 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
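waitforlisten, invoked at the end of the setup above, is at heart a poll loop: keep checking that the pid is still alive and that a no-op RPC against the target's socket succeeds, until one of the two settles the question. A rough equivalent, assuming the same rpc.py path used elsewhere in this log (the real helper has a larger retry budget and more error handling):

waitfor_rpc() {                           # rough stand-in for waitforlisten <pid> [socket]
  local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1                      # target died before it could listen
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
    sleep 0.5
  done
  return 1                                # gave up waiting
}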
00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.843 13:53:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:20.844 13:53:09 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:20.844 [2024-07-25 13:53:09.829748] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:10:20.844 [2024-07-25 13:53:09.829981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112915 ] 00:10:21.409 [2024-07-25 13:53:10.280294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.666 [2024-07-25 13:53:10.507111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.924 13:53:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.924 13:53:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:10:21.924 00:10:21.924 13:53:10 json_config -- json_config/common.sh@26 -- # echo '' 00:10:21.924 13:53:10 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:10:21.924 13:53:10 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:10:21.924 13:53:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.924 13:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:21.924 13:53:10 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:10:21.924 13:53:10 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:10:21.924 13:53:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.924 13:53:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:21.924 13:53:10 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:21.924 13:53:10 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:10:21.924 13:53:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:22.857 13:53:11 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:10:22.857 13:53:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:22.857 13:53:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.857 13:53:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:22.857 13:53:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:22.857 13:53:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:22.857 13:53:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:22.857 13:53:11 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:22.857 13:53:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:22.857 13:53:11 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@48 -- # 
local get_types 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@51 -- # sort 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:10:23.115 13:53:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.115 13:53:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@59 -- # return 0 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:10:23.115 13:53:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.115 13:53:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:23.115 13:53:12 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:10:23.115 13:53:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:23.682 13:53:12 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:10:23.682 13:53:12 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:23.682 13:53:12 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:23.682 13:53:12 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:10:23.682 13:53:12 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:10:23.682 13:53:12 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:10:23.682 13:53:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:10:23.682 Nvme0n1p0 Nvme0n1p1 00:10:23.682 13:53:12 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:23.682 13:53:12 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:10:24.249 [2024-07-25 13:53:13.029984] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:24.249 [2024-07-25 13:53:13.030202] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:24.249 00:10:24.249 13:53:13 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:24.249 13:53:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:24.510 Malloc3 00:10:24.510 13:53:13 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:24.510 13:53:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:24.770 [2024-07-25 13:53:13.594963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:24.770 [2024-07-25 13:53:13.595142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.770 [2024-07-25 13:53:13.595204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:10:24.770 [2024-07-25 13:53:13.595236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.770 [2024-07-25 13:53:13.597928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.770 [2024-07-25 13:53:13.598007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:24.770 PTBdevFromMalloc3 00:10:24.770 13:53:13 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:24.770 13:53:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:25.029 Null0 00:10:25.029 13:53:13 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:25.029 13:53:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:25.287 Malloc0 00:10:25.287 13:53:14 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:10:25.287 13:53:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:25.544 Malloc1 00:10:25.544 13:53:14 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:25.544 13:53:14 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:26.108 102400+0 records in 00:10:26.108 102400+0 records out 00:10:26.108 104857600 bytes (105 MB, 100 MiB) copied, 0.375022 s, 280 MB/s 00:10:26.108 13:53:14 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:26.108 13:53:14 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:26.365 aio_disk 00:10:26.365 13:53:15 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:26.365 13:53:15 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:26.365 13:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:26.622 855ffa82-59f7-4b74-bb74-d2568c1aaecd 00:10:26.622 13:53:15 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:26.622 13:53:15 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:26.622 13:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:26.880 13:53:15 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:26.880 13:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:27.138 13:53:16 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:27.138 13:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:27.396 13:53:16 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:27.396 13:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:27.653 13:53:16 json_config -- json_config/json_config.sh@161 -- # [[ 0 -eq 1 ]] 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d91b13d5-e404-48ee-abdb-427ec8ac0f95 bdev_register:6cee270a-1ab0-4a7c-b7da-93c50bc7c6da bdev_register:caede36b-e415-4dc0-83bb-78150181c802 bdev_register:9ab97012-5c87-47f2-8771-bd2e23bd1d18 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 
bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d91b13d5-e404-48ee-abdb-427ec8ac0f95 bdev_register:6cee270a-1ab0-4a7c-b7da-93c50bc7c6da bdev_register:caede36b-e415-4dc0-83bb-78150181c802 bdev_register:9ab97012-5c87-47f2-8771-bd2e23bd1d18 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@75 -- # sort 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@76 -- # sort 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:10:27.654 13:53:16 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:27.654 13:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.912 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- 
json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:d91b13d5-e404-48ee-abdb-427ec8ac0f95 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:6cee270a-1ab0-4a7c-b7da-93c50bc7c6da 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:caede36b-e415-4dc0-83bb-78150181c802 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:9ab97012-5c87-47f2-8771-bd2e23bd1d18 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:6cee270a-1ab0-4a7c-b7da-93c50bc7c6da bdev_register:9ab97012-5c87-47f2-8771-bd2e23bd1d18 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:caede36b-e415-4dc0-83bb-78150181c802 bdev_register:d91b13d5-e404-48ee-abdb-427ec8ac0f95 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\c\e\e\2\7\0\a\-\1\a\b\0\-\4\a\7\c\-\b\7\d\a\-\9\3\c\5\0\b\c\7\c\6\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\a\b\9\7\0\1\2\-\5\c\8\7\-\4\7\f\2\-\8\7\7\1\-\b\d\2\e\2\3\b\d\1\d\1\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\a\e\d\e\3\6\b\-\e\4\1\5\-\4\d\c\0\-\8\3\b\b\-\7\8\1\5\0\1\8\1\c\8\0\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\9\1\b\1\3\d\5\-\e\4\0\4\-\4\8\e\e\-\a\b\d\b\-\4\2\7\e\c\8\a\c\0\f\9\5 ]] 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@90 -- # cat 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:6cee270a-1ab0-4a7c-b7da-93c50bc7c6da bdev_register:9ab97012-5c87-47f2-8771-bd2e23bd1d18 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:caede36b-e415-4dc0-83bb-78150181c802 bdev_register:d91b13d5-e404-48ee-abdb-427ec8ac0f95 00:10:27.913 Expected events matched: 00:10:27.913 bdev_register:6cee270a-1ab0-4a7c-b7da-93c50bc7c6da 00:10:27.913 bdev_register:9ab97012-5c87-47f2-8771-bd2e23bd1d18 00:10:27.913 bdev_register:Malloc0 00:10:27.913 bdev_register:Malloc0p0 00:10:27.913 bdev_register:Malloc0p1 00:10:27.913 bdev_register:Malloc0p2 00:10:27.913 bdev_register:Malloc1 00:10:27.913 bdev_register:Malloc3 00:10:27.913 bdev_register:Null0 00:10:27.913 bdev_register:Nvme0n1 00:10:27.913 bdev_register:Nvme0n1p0 00:10:27.913 bdev_register:Nvme0n1p1 00:10:27.913 bdev_register:PTBdevFromMalloc3 00:10:27.913 bdev_register:aio_disk 00:10:27.913 bdev_register:caede36b-e415-4dc0-83bb-78150181c802 00:10:27.913 bdev_register:d91b13d5-e404-48ee-abdb-427ec8ac0f95 00:10:27.913 13:53:16 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:10:27.913 13:53:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:27.913 13:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:28.172 13:53:16 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:28.172 13:53:16 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:28.172 13:53:16 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:10:28.172 13:53:16 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:10:28.172 13:53:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.172 13:53:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:28.172 13:53:16 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:10:28.172 13:53:16 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:28.172 13:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:28.430 MallocBdevForConfigChangeCheck 00:10:28.430 13:53:17 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:10:28.430 13:53:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.430 13:53:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:28.430 13:53:17 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:10:28.430 13:53:17 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:28.688 INFO: shutting down applications... 00:10:28.688 13:53:17 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:10:28.688 13:53:17 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:10:28.688 13:53:17 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:10:28.688 13:53:17 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:10:28.688 13:53:17 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:28.946 [2024-07-25 13:53:17.839596] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:29.204 Calling clear_vhost_scsi_subsystem 00:10:29.204 Calling clear_iscsi_subsystem 00:10:29.204 Calling clear_vhost_blk_subsystem 00:10:29.204 Calling clear_nbd_subsystem 00:10:29.204 Calling clear_nvmf_subsystem 00:10:29.204 Calling clear_bdev_subsystem 00:10:29.204 13:53:18 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:29.204 13:53:18 json_config -- json_config/json_config.sh@347 -- # count=100 00:10:29.204 13:53:18 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:10:29.204 13:53:18 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:29.204 13:53:18 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:29.204 13:53:18 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:29.462 13:53:18 json_config -- json_config/json_config.sh@349 -- # break 00:10:29.462 13:53:18 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:10:29.462 13:53:18 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:10:29.462 13:53:18 json_config -- json_config/common.sh@31 -- # local app=target 00:10:29.462 13:53:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:29.462 13:53:18 json_config -- json_config/common.sh@35 -- # [[ -n 112915 ]] 00:10:29.462 13:53:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 112915 00:10:29.462 13:53:18 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:29.462 13:53:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:29.462 13:53:18 json_config -- json_config/common.sh@41 -- # kill -0 112915 00:10:29.462 13:53:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:30.026 13:53:18 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:30.026 13:53:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:30.026 13:53:18 json_config -- json_config/common.sh@41 -- # kill -0 112915 00:10:30.026 13:53:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:30.593 13:53:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:30.593 13:53:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:30.593 13:53:19 json_config -- json_config/common.sh@41 -- # kill -0 112915 00:10:30.593 13:53:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:30.593 13:53:19 json_config -- 
json_config/common.sh@43 -- # break 00:10:30.593 13:53:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:30.593 13:53:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:30.593 SPDK target shutdown done 00:10:30.593 INFO: relaunching applications... 00:10:30.593 13:53:19 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:10:30.593 13:53:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:30.593 13:53:19 json_config -- json_config/common.sh@9 -- # local app=target 00:10:30.593 13:53:19 json_config -- json_config/common.sh@10 -- # shift 00:10:30.593 13:53:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:30.593 13:53:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:30.593 13:53:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:30.593 13:53:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:30.593 13:53:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:30.593 13:53:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=113188 00:10:30.593 Waiting for target to run... 00:10:30.593 13:53:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:30.593 13:53:19 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:30.593 13:53:19 json_config -- json_config/common.sh@25 -- # waitforlisten 113188 /var/tmp/spdk_tgt.sock 00:10:30.593 13:53:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 113188 ']' 00:10:30.593 13:53:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:30.593 13:53:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:30.593 13:53:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:30.593 13:53:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.593 13:53:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:30.593 [2024-07-25 13:53:19.502959] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:10:30.593 [2024-07-25 13:53:19.503709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113188 ] 00:10:31.194 [2024-07-25 13:53:19.975404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.194 [2024-07-25 13:53:20.205819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.129 [2024-07-25 13:53:20.891293] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:32.129 [2024-07-25 13:53:20.891424] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:32.129 [2024-07-25 13:53:20.899281] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:32.129 [2024-07-25 13:53:20.899372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:32.129 [2024-07-25 13:53:20.907291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:32.129 [2024-07-25 13:53:20.907372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:32.129 [2024-07-25 13:53:20.907408] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:32.129 [2024-07-25 13:53:21.004049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:32.129 [2024-07-25 13:53:21.004197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.129 [2024-07-25 13:53:21.004234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:32.129 [2024-07-25 13:53:21.004265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.129 [2024-07-25 13:53:21.004842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.129 [2024-07-25 13:53:21.004897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:32.129 13:53:21 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.129 00:10:32.129 13:53:21 json_config -- common/autotest_common.sh@864 -- # return 0 00:10:32.129 13:53:21 json_config -- json_config/common.sh@26 -- # echo '' 00:10:32.129 13:53:21 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:10:32.129 INFO: Checking if target configuration is the same... 00:10:32.129 13:53:21 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:32.129 13:53:21 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:32.129 13:53:21 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:10:32.129 13:53:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:32.129 + '[' 2 -ne 2 ']' 00:10:32.129 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:32.129 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:32.129 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:32.129 +++ basename /dev/fd/62 00:10:32.387 ++ mktemp /tmp/62.XXX 00:10:32.387 + tmp_file_1=/tmp/62.dDK 00:10:32.387 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:32.387 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:32.387 + tmp_file_2=/tmp/spdk_tgt_config.json.61R 00:10:32.387 + ret=0 00:10:32.387 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:32.645 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:32.645 + diff -u /tmp/62.dDK /tmp/spdk_tgt_config.json.61R 00:10:32.645 INFO: JSON config files are the same 00:10:32.645 + echo 'INFO: JSON config files are the same' 00:10:32.645 + rm /tmp/62.dDK /tmp/spdk_tgt_config.json.61R 00:10:32.645 + exit 0 00:10:32.645 13:53:21 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:10:32.645 INFO: changing configuration and checking if this can be detected... 00:10:32.645 13:53:21 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:32.645 13:53:21 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:32.645 13:53:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:32.903 13:53:21 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:32.903 13:53:21 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:10:32.903 13:53:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:32.903 + '[' 2 -ne 2 ']' 00:10:32.903 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:32.903 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:32.903 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:32.903 +++ basename /dev/fd/62 00:10:32.903 ++ mktemp /tmp/62.XXX 00:10:32.903 + tmp_file_1=/tmp/62.Pjy 00:10:32.903 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:32.903 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:32.903 + tmp_file_2=/tmp/spdk_tgt_config.json.ijY 00:10:32.903 + ret=0 00:10:32.903 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:33.469 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:33.469 + diff -u /tmp/62.Pjy /tmp/spdk_tgt_config.json.ijY 00:10:33.469 + ret=1 00:10:33.469 + echo '=== Start of file: /tmp/62.Pjy ===' 00:10:33.469 + cat /tmp/62.Pjy 00:10:33.469 + echo '=== End of file: /tmp/62.Pjy ===' 00:10:33.469 + echo '' 00:10:33.469 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ijY ===' 00:10:33.469 + cat /tmp/spdk_tgt_config.json.ijY 00:10:33.470 + echo '=== End of file: /tmp/spdk_tgt_config.json.ijY ===' 00:10:33.470 + echo '' 00:10:33.470 + rm /tmp/62.Pjy /tmp/spdk_tgt_config.json.ijY 00:10:33.470 + exit 1 00:10:33.470 INFO: configuration change detected. 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 
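The change-detection traced above is plain text tooling: save_config dumps the live configuration over RPC, config_filter.py -method sort canonicalizes ordering on both sides, and diff -u decides the outcome (exit 0 before the change, exit 1 once MallocBdevForConfigChangeCheck has been deleted). A condensed version of the same round trip, using fixed temp file names instead of the mktemp names seen in the log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
"$RPC" -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
"$FILTER" -method sort < /tmp/live_config.json > /tmp/live_sorted.json
"$FILTER" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
if diff -u /tmp/file_sorted.json /tmp/live_sorted.json; then
  echo 'configs match'                    # the harness prints: INFO: JSON config files are the same
else
  echo 'configuration change detected'    # seen above after MallocBdevForConfigChangeCheck was deleted
fi

The same saved spdk_tgt_config.json is what the relaunch as pid 113188 earlier in this run was started from, via spdk_tgt --json.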
00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:10:33.470 13:53:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:33.470 13:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@321 -- # [[ -n 113188 ]] 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:10:33.470 13:53:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:33.470 13:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:10:33.470 13:53:22 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:33.470 13:53:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:33.728 13:53:22 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:33.728 13:53:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:33.986 13:53:22 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:33.986 13:53:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:34.244 13:53:23 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:34.245 13:53:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:34.503 13:53:23 json_config -- json_config/json_config.sh@197 -- # uname -s 00:10:34.503 13:53:23 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:10:34.503 13:53:23 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:10:34.503 13:53:23 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:10:34.503 13:53:23 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:34.503 13:53:23 json_config -- json_config/json_config.sh@327 -- # killprocess 113188 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@950 -- # '[' -z 113188 ']' 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@954 -- # kill -0 113188 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@955 -- # uname 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113188 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.503 13:53:23 json_config 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.503 killing process with pid 113188 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113188' 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@969 -- # kill 113188 00:10:34.503 13:53:23 json_config -- common/autotest_common.sh@974 -- # wait 113188 00:10:35.882 13:53:24 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:35.882 13:53:24 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:10:35.882 13:53:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.882 13:53:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:35.882 13:53:24 json_config -- json_config/json_config.sh@332 -- # return 0 00:10:35.882 INFO: Success 00:10:35.882 13:53:24 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:10:35.882 00:10:35.882 real 0m14.893s 00:10:35.882 user 0m21.804s 00:10:35.882 sys 0m2.623s 00:10:35.882 13:53:24 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.882 13:53:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:35.882 ************************************ 00:10:35.882 END TEST json_config 00:10:35.882 ************************************ 00:10:35.882 13:53:24 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:35.882 13:53:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.882 13:53:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.882 13:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:35.882 ************************************ 00:10:35.882 START TEST json_config_extra_key 00:10:35.882 ************************************ 00:10:35.882 13:53:24 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:35.882 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b23b8860-da55-4888-acbd-a1144dea731b 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b23b8860-da55-4888-acbd-a1144dea731b 00:10:35.882 
13:53:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.882 13:53:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.883 13:53:24 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.883 13:53:24 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.883 13:53:24 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.883 13:53:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:35.883 13:53:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:35.883 13:53:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:35.883 13:53:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:35.883 13:53:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.883 13:53:24 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.883 13:53:24 json_config_extra_key -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:35.883 INFO: launching applications... 00:10:35.883 13:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=113381 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:35.883 Waiting for target to run... 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 113381 /var/tmp/spdk_tgt.sock 00:10:35.883 13:53:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:35.883 13:53:24 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 113381 ']' 00:10:35.883 13:53:24 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:35.883 13:53:24 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
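The trace above launches spdk_tgt with --json extra_key.json and then waits in waitforlisten until the target answers RPCs on /var/tmp/spdk_tgt.sock. A minimal stand-alone sketch of that wait pattern, in the same shell style, assuming a 30-attempt budget and using spdk_get_version as the liveness probe (both are illustrative choices, not the framework's actual waitforlisten internals):

wait_for_rpc() {
    # Poll the target's RPC socket until it answers, or give up after ~15 s.
    local sock=$1 i
    for ((i = 0; i < 30; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

wait_for_rpc /var/tmp/spdk_tgt.sock || echo 'target never started listening' >&2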
00:10:35.883 13:53:24 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:35.883 13:53:24 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.883 13:53:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:35.883 [2024-07-25 13:53:24.766865] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:10:35.883 [2024-07-25 13:53:24.767090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113381 ] 00:10:36.451 [2024-07-25 13:53:25.242648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.451 [2024-07-25 13:53:25.465247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.387 13:53:26 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.387 13:53:26 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:10:37.387 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:37.387 INFO: shutting down applications... 00:10:37.387 13:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:37.387 13:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 113381 ]] 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 113381 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113381 00:10:37.387 13:53:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:37.646 13:53:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:37.646 13:53:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.646 13:53:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113381 00:10:37.646 13:53:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:38.213 13:53:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:38.213 13:53:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:38.213 13:53:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113381 00:10:38.213 13:53:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:38.784 13:53:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:38.784 13:53:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:38.784 13:53:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113381 00:10:38.784 13:53:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:39.351 13:53:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:39.351 13:53:28 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:10:39.351 13:53:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113381 00:10:39.351 13:53:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:39.608 13:53:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:39.608 13:53:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:39.608 13:53:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113381 00:10:39.608 13:53:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:40.174 13:53:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:40.174 13:53:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:40.174 13:53:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113381 00:10:40.174 13:53:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:40.174 13:53:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:40.174 13:53:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:40.174 SPDK target shutdown done 00:10:40.174 13:53:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:40.174 Success 00:10:40.174 13:53:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:40.174 00:10:40.174 real 0m4.510s 00:10:40.174 user 0m3.936s 00:10:40.174 sys 0m0.581s 00:10:40.174 13:53:29 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.174 13:53:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:40.174 ************************************ 00:10:40.174 END TEST json_config_extra_key 00:10:40.174 ************************************ 00:10:40.174 13:53:29 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:40.174 13:53:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:40.174 13:53:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.174 13:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:40.174 ************************************ 00:10:40.174 START TEST alias_rpc 00:10:40.174 ************************************ 00:10:40.174 13:53:29 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:40.432 * Looking for test storage... 00:10:40.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:40.432 13:53:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:40.432 13:53:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=113489 00:10:40.432 13:53:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 113489 00:10:40.432 13:53:29 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 113489 ']' 00:10:40.432 13:53:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:40.432 13:53:29 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.432 13:53:29 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.432 13:53:29 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
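The shutdown traced above (json_config_test_shutdown_app) sends SIGINT to the target and then re-checks the pid with kill -0 every 0.5 s, breaking out once the process is gone or after 30 attempts. The same pattern as a stand-alone shell sketch (function name and message text are illustrative, not the framework's):

shutdown_app() {
    # Ask the app to exit cleanly, then wait up to ~15 s for it to disappear.
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null || return 0      # already gone
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0       # pid no longer exists: clean exit
        sleep 0.5
    done
    echo "process $pid did not exit after SIGINT" >&2
    return 1
}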
00:10:40.432 13:53:29 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.432 13:53:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.432 [2024-07-25 13:53:29.337303] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:10:40.432 [2024-07-25 13:53:29.337538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113489 ] 00:10:40.690 [2024-07-25 13:53:29.508183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.949 [2024-07-25 13:53:29.745005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.883 13:53:30 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.883 13:53:30 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:41.883 13:53:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:41.883 13:53:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 113489 00:10:41.883 13:53:30 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 113489 ']' 00:10:41.883 13:53:30 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 113489 00:10:41.883 13:53:30 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:10:41.883 13:53:30 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.883 13:53:30 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113489 00:10:42.141 13:53:30 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.141 13:53:30 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.141 killing process with pid 113489 00:10:42.141 13:53:30 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113489' 00:10:42.141 13:53:30 alias_rpc -- common/autotest_common.sh@969 -- # kill 113489 00:10:42.141 13:53:30 alias_rpc -- common/autotest_common.sh@974 -- # wait 113489 00:10:44.675 00:10:44.675 real 0m4.013s 00:10:44.675 user 0m4.235s 00:10:44.675 sys 0m0.565s 00:10:44.675 13:53:33 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.675 13:53:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.675 ************************************ 00:10:44.675 END TEST alias_rpc 00:10:44.675 ************************************ 00:10:44.675 13:53:33 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:44.675 13:53:33 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:44.675 13:53:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:44.675 13:53:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.675 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:10:44.675 ************************************ 00:10:44.675 START TEST spdkcli_tcp 00:10:44.675 ************************************ 00:10:44.675 13:53:33 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:44.675 * Looking for test storage... 
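The alias_rpc run above replays a JSON configuration into the target through rpc.py load_config. The usual round-trip behind that RPC pair, sketched with illustrative paths (save_config writes the running configuration to stdout, load_config reads one from stdin):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Capture the current configuration of a running target...
"$RPC" save_config > /tmp/current_config.json

# ...and replay it later, typically into a freshly started target on the same socket.
"$RPC" load_config < /tmp/current_config.json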
00:10:44.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:44.675 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:44.675 13:53:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:44.675 13:53:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:44.675 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:44.675 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:44.675 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:44.675 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:44.675 13:53:33 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:44.675 13:53:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.676 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=113602 00:10:44.676 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:44.676 13:53:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 113602 00:10:44.676 13:53:33 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 113602 ']' 00:10:44.676 13:53:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.676 13:53:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.676 13:53:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.676 13:53:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.676 13:53:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.676 [2024-07-25 13:53:33.409945] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:10:44.676 [2024-07-25 13:53:33.410179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113602 ] 00:10:44.676 [2024-07-25 13:53:33.590725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.934 [2024-07-25 13:53:33.815137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.934 [2024-07-25 13:53:33.815140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.868 13:53:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.868 13:53:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:10:45.868 13:53:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=113624 00:10:45.868 13:53:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:45.868 13:53:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:45.868 [ 00:10:45.868 "spdk_get_version", 00:10:45.868 "rpc_get_methods", 00:10:45.868 "keyring_get_keys", 00:10:45.868 "trace_get_info", 00:10:45.868 "trace_get_tpoint_group_mask", 00:10:45.868 "trace_disable_tpoint_group", 00:10:45.868 "trace_enable_tpoint_group", 00:10:45.868 "trace_clear_tpoint_mask", 00:10:45.868 "trace_set_tpoint_mask", 00:10:45.868 "framework_get_pci_devices", 00:10:45.868 "framework_get_config", 00:10:45.868 "framework_get_subsystems", 00:10:45.868 "iobuf_get_stats", 00:10:45.868 "iobuf_set_options", 00:10:45.868 "sock_get_default_impl", 00:10:45.868 "sock_set_default_impl", 00:10:45.868 "sock_impl_set_options", 00:10:45.868 "sock_impl_get_options", 00:10:45.868 "vmd_rescan", 00:10:45.868 "vmd_remove_device", 00:10:45.868 "vmd_enable", 00:10:45.868 "accel_get_stats", 00:10:45.868 "accel_set_options", 00:10:45.868 "accel_set_driver", 00:10:45.868 "accel_crypto_key_destroy", 00:10:45.868 "accel_crypto_keys_get", 00:10:45.868 "accel_crypto_key_create", 00:10:45.868 "accel_assign_opc", 00:10:45.868 "accel_get_module_info", 00:10:45.868 "accel_get_opc_assignments", 00:10:45.868 "notify_get_notifications", 00:10:45.868 "notify_get_types", 00:10:45.868 "bdev_get_histogram", 00:10:45.868 "bdev_enable_histogram", 00:10:45.868 "bdev_set_qos_limit", 00:10:45.868 "bdev_set_qd_sampling_period", 00:10:45.868 "bdev_get_bdevs", 00:10:45.868 "bdev_reset_iostat", 00:10:45.868 "bdev_get_iostat", 00:10:45.868 "bdev_examine", 00:10:45.868 "bdev_wait_for_examine", 00:10:45.868 "bdev_set_options", 00:10:45.868 "scsi_get_devices", 00:10:45.868 "thread_set_cpumask", 00:10:45.868 "framework_get_governor", 00:10:45.868 "framework_get_scheduler", 00:10:45.868 "framework_set_scheduler", 00:10:45.868 "framework_get_reactors", 00:10:45.868 "thread_get_io_channels", 00:10:45.868 "thread_get_pollers", 00:10:45.868 "thread_get_stats", 00:10:45.868 "framework_monitor_context_switch", 00:10:45.868 "spdk_kill_instance", 00:10:45.868 "log_enable_timestamps", 00:10:45.868 "log_get_flags", 00:10:45.868 "log_clear_flag", 00:10:45.868 "log_set_flag", 00:10:45.868 "log_get_level", 00:10:45.868 "log_set_level", 00:10:45.868 "log_get_print_level", 00:10:45.868 "log_set_print_level", 00:10:45.868 "framework_enable_cpumask_locks", 00:10:45.868 "framework_disable_cpumask_locks", 00:10:45.868 "framework_wait_init", 00:10:45.868 "framework_start_init", 00:10:45.868 
"virtio_blk_create_transport", 00:10:45.868 "virtio_blk_get_transports", 00:10:45.868 "vhost_controller_set_coalescing", 00:10:45.868 "vhost_get_controllers", 00:10:45.868 "vhost_delete_controller", 00:10:45.868 "vhost_create_blk_controller", 00:10:45.868 "vhost_scsi_controller_remove_target", 00:10:45.868 "vhost_scsi_controller_add_target", 00:10:45.868 "vhost_start_scsi_controller", 00:10:45.868 "vhost_create_scsi_controller", 00:10:45.868 "nbd_get_disks", 00:10:45.868 "nbd_stop_disk", 00:10:45.868 "nbd_start_disk", 00:10:45.868 "env_dpdk_get_mem_stats", 00:10:45.868 "nvmf_stop_mdns_prr", 00:10:45.868 "nvmf_publish_mdns_prr", 00:10:45.868 "nvmf_subsystem_get_listeners", 00:10:45.868 "nvmf_subsystem_get_qpairs", 00:10:45.868 "nvmf_subsystem_get_controllers", 00:10:45.868 "nvmf_get_stats", 00:10:45.868 "nvmf_get_transports", 00:10:45.868 "nvmf_create_transport", 00:10:45.868 "nvmf_get_targets", 00:10:45.868 "nvmf_delete_target", 00:10:45.868 "nvmf_create_target", 00:10:45.868 "nvmf_subsystem_allow_any_host", 00:10:45.868 "nvmf_subsystem_remove_host", 00:10:45.868 "nvmf_subsystem_add_host", 00:10:45.868 "nvmf_ns_remove_host", 00:10:45.868 "nvmf_ns_add_host", 00:10:45.868 "nvmf_subsystem_remove_ns", 00:10:45.868 "nvmf_subsystem_add_ns", 00:10:45.868 "nvmf_subsystem_listener_set_ana_state", 00:10:45.868 "nvmf_discovery_get_referrals", 00:10:45.868 "nvmf_discovery_remove_referral", 00:10:45.868 "nvmf_discovery_add_referral", 00:10:45.868 "nvmf_subsystem_remove_listener", 00:10:45.868 "nvmf_subsystem_add_listener", 00:10:45.868 "nvmf_delete_subsystem", 00:10:45.868 "nvmf_create_subsystem", 00:10:45.868 "nvmf_get_subsystems", 00:10:45.868 "nvmf_set_crdt", 00:10:45.868 "nvmf_set_config", 00:10:45.868 "nvmf_set_max_subsystems", 00:10:45.868 "iscsi_get_histogram", 00:10:45.868 "iscsi_enable_histogram", 00:10:45.868 "iscsi_set_options", 00:10:45.868 "iscsi_get_auth_groups", 00:10:45.868 "iscsi_auth_group_remove_secret", 00:10:45.868 "iscsi_auth_group_add_secret", 00:10:45.868 "iscsi_delete_auth_group", 00:10:45.868 "iscsi_create_auth_group", 00:10:45.868 "iscsi_set_discovery_auth", 00:10:45.868 "iscsi_get_options", 00:10:45.868 "iscsi_target_node_request_logout", 00:10:45.869 "iscsi_target_node_set_redirect", 00:10:45.869 "iscsi_target_node_set_auth", 00:10:45.869 "iscsi_target_node_add_lun", 00:10:45.869 "iscsi_get_stats", 00:10:45.869 "iscsi_get_connections", 00:10:45.869 "iscsi_portal_group_set_auth", 00:10:45.869 "iscsi_start_portal_group", 00:10:45.869 "iscsi_delete_portal_group", 00:10:45.869 "iscsi_create_portal_group", 00:10:45.869 "iscsi_get_portal_groups", 00:10:45.869 "iscsi_delete_target_node", 00:10:45.869 "iscsi_target_node_remove_pg_ig_maps", 00:10:45.869 "iscsi_target_node_add_pg_ig_maps", 00:10:45.869 "iscsi_create_target_node", 00:10:45.869 "iscsi_get_target_nodes", 00:10:45.869 "iscsi_delete_initiator_group", 00:10:45.869 "iscsi_initiator_group_remove_initiators", 00:10:45.869 "iscsi_initiator_group_add_initiators", 00:10:45.869 "iscsi_create_initiator_group", 00:10:45.869 "iscsi_get_initiator_groups", 00:10:45.869 "keyring_linux_set_options", 00:10:45.869 "keyring_file_remove_key", 00:10:45.869 "keyring_file_add_key", 00:10:45.869 "iaa_scan_accel_module", 00:10:45.869 "dsa_scan_accel_module", 00:10:45.869 "ioat_scan_accel_module", 00:10:45.869 "accel_error_inject_error", 00:10:45.869 "bdev_iscsi_delete", 00:10:45.869 "bdev_iscsi_create", 00:10:45.869 "bdev_iscsi_set_options", 00:10:45.869 "bdev_virtio_attach_controller", 00:10:45.869 "bdev_virtio_scsi_get_devices", 00:10:45.869 
"bdev_virtio_detach_controller", 00:10:45.869 "bdev_virtio_blk_set_hotplug", 00:10:45.869 "bdev_ftl_set_property", 00:10:45.869 "bdev_ftl_get_properties", 00:10:45.869 "bdev_ftl_get_stats", 00:10:45.869 "bdev_ftl_unmap", 00:10:45.869 "bdev_ftl_unload", 00:10:45.869 "bdev_ftl_delete", 00:10:45.869 "bdev_ftl_load", 00:10:45.869 "bdev_ftl_create", 00:10:45.869 "bdev_aio_delete", 00:10:45.869 "bdev_aio_rescan", 00:10:45.869 "bdev_aio_create", 00:10:45.869 "blobfs_create", 00:10:45.869 "blobfs_detect", 00:10:45.869 "blobfs_set_cache_size", 00:10:45.869 "bdev_zone_block_delete", 00:10:45.869 "bdev_zone_block_create", 00:10:45.869 "bdev_delay_delete", 00:10:45.869 "bdev_delay_create", 00:10:45.869 "bdev_delay_update_latency", 00:10:45.869 "bdev_split_delete", 00:10:45.869 "bdev_split_create", 00:10:45.869 "bdev_error_inject_error", 00:10:45.869 "bdev_error_delete", 00:10:45.869 "bdev_error_create", 00:10:45.869 "bdev_raid_set_options", 00:10:45.869 "bdev_raid_remove_base_bdev", 00:10:45.869 "bdev_raid_add_base_bdev", 00:10:45.869 "bdev_raid_delete", 00:10:45.869 "bdev_raid_create", 00:10:45.869 "bdev_raid_get_bdevs", 00:10:45.869 "bdev_lvol_set_parent_bdev", 00:10:45.869 "bdev_lvol_set_parent", 00:10:45.869 "bdev_lvol_check_shallow_copy", 00:10:45.869 "bdev_lvol_start_shallow_copy", 00:10:45.869 "bdev_lvol_grow_lvstore", 00:10:45.869 "bdev_lvol_get_lvols", 00:10:45.869 "bdev_lvol_get_lvstores", 00:10:45.869 "bdev_lvol_delete", 00:10:45.869 "bdev_lvol_set_read_only", 00:10:45.869 "bdev_lvol_resize", 00:10:45.869 "bdev_lvol_decouple_parent", 00:10:45.869 "bdev_lvol_inflate", 00:10:45.869 "bdev_lvol_rename", 00:10:45.869 "bdev_lvol_clone_bdev", 00:10:45.869 "bdev_lvol_clone", 00:10:45.869 "bdev_lvol_snapshot", 00:10:45.869 "bdev_lvol_create", 00:10:45.869 "bdev_lvol_delete_lvstore", 00:10:45.869 "bdev_lvol_rename_lvstore", 00:10:45.869 "bdev_lvol_create_lvstore", 00:10:45.869 "bdev_passthru_delete", 00:10:45.869 "bdev_passthru_create", 00:10:45.869 "bdev_nvme_cuse_unregister", 00:10:45.869 "bdev_nvme_cuse_register", 00:10:45.869 "bdev_opal_new_user", 00:10:45.869 "bdev_opal_set_lock_state", 00:10:45.869 "bdev_opal_delete", 00:10:45.869 "bdev_opal_get_info", 00:10:45.869 "bdev_opal_create", 00:10:45.869 "bdev_nvme_opal_revert", 00:10:45.869 "bdev_nvme_opal_init", 00:10:45.869 "bdev_nvme_send_cmd", 00:10:45.869 "bdev_nvme_get_path_iostat", 00:10:45.869 "bdev_nvme_get_mdns_discovery_info", 00:10:45.869 "bdev_nvme_stop_mdns_discovery", 00:10:45.869 "bdev_nvme_start_mdns_discovery", 00:10:45.869 "bdev_nvme_set_multipath_policy", 00:10:45.869 "bdev_nvme_set_preferred_path", 00:10:45.869 "bdev_nvme_get_io_paths", 00:10:45.869 "bdev_nvme_remove_error_injection", 00:10:45.869 "bdev_nvme_add_error_injection", 00:10:45.869 "bdev_nvme_get_discovery_info", 00:10:45.869 "bdev_nvme_stop_discovery", 00:10:45.869 "bdev_nvme_start_discovery", 00:10:45.869 "bdev_nvme_get_controller_health_info", 00:10:45.869 "bdev_nvme_disable_controller", 00:10:45.869 "bdev_nvme_enable_controller", 00:10:45.869 "bdev_nvme_reset_controller", 00:10:45.869 "bdev_nvme_get_transport_statistics", 00:10:45.869 "bdev_nvme_apply_firmware", 00:10:45.869 "bdev_nvme_detach_controller", 00:10:45.869 "bdev_nvme_get_controllers", 00:10:45.869 "bdev_nvme_attach_controller", 00:10:45.869 "bdev_nvme_set_hotplug", 00:10:45.869 "bdev_nvme_set_options", 00:10:45.869 "bdev_null_resize", 00:10:45.869 "bdev_null_delete", 00:10:45.869 "bdev_null_create", 00:10:45.869 "bdev_malloc_delete", 00:10:45.869 "bdev_malloc_create" 00:10:45.869 ] 00:10:46.128 13:53:34 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.128 13:53:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:46.128 13:53:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 113602 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 113602 ']' 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 113602 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113602 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.128 killing process with pid 113602 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113602' 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 113602 00:10:46.128 13:53:34 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 113602 00:10:48.659 00:10:48.659 real 0m3.896s 00:10:48.659 user 0m6.996s 00:10:48.659 sys 0m0.627s 00:10:48.659 13:53:37 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.659 ************************************ 00:10:48.659 END TEST spdkcli_tcp 00:10:48.659 ************************************ 00:10:48.659 13:53:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:48.659 13:53:37 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:48.659 13:53:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:48.659 13:53:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.659 13:53:37 -- common/autotest_common.sh@10 -- # set +x 00:10:48.659 ************************************ 00:10:48.659 START TEST dpdk_mem_utility 00:10:48.659 ************************************ 00:10:48.659 13:53:37 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:48.659 * Looking for test storage... 00:10:48.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:48.659 13:53:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:48.659 13:53:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=113728 00:10:48.659 13:53:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 113728 00:10:48.659 13:53:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:48.659 13:53:37 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 113728 ']' 00:10:48.659 13:53:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.659 13:53:37 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
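The spdkcli_tcp run above exposes the target's UNIX-domain RPC socket on TCP port 9998 through socat and then drives it with rpc.py's TCP options (-s address, -p port, -t timeout, -r retries), which is what produces the long rpc_get_methods listing. The same bridge reduced to its two commands, with backgrounding and cleanup added for illustration:

# Forward 127.0.0.1:9998 to the target's local RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
SOCAT_PID=$!

# Issue an RPC over TCP instead of the UNIX socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$SOCAT_PID"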
00:10:48.659 13:53:37 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.659 13:53:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.659 13:53:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:48.659 [2024-07-25 13:53:37.327374] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:10:48.659 [2024-07-25 13:53:37.327589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113728 ] 00:10:48.659 [2024-07-25 13:53:37.488562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.918 [2024-07-25 13:53:37.710015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.484 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.484 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:10:49.484 13:53:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:49.484 13:53:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:49.484 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.484 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:49.484 { 00:10:49.484 "filename": "/tmp/spdk_mem_dump.txt" 00:10:49.484 } 00:10:49.484 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.484 13:53:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:49.744 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:49.744 1 heaps totaling size 820.000000 MiB 00:10:49.744 size: 820.000000 MiB heap id: 0 00:10:49.744 end heaps---------- 00:10:49.744 8 mempools totaling size 598.116089 MiB 00:10:49.744 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:49.744 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:49.744 size: 84.521057 MiB name: bdev_io_113728 00:10:49.744 size: 51.011292 MiB name: evtpool_113728 00:10:49.744 size: 50.003479 MiB name: msgpool_113728 00:10:49.744 size: 21.763794 MiB name: PDU_Pool 00:10:49.744 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:49.744 size: 0.026123 MiB name: Session_Pool 00:10:49.744 end mempools------- 00:10:49.744 6 memzones totaling size 4.142822 MiB 00:10:49.744 size: 1.000366 MiB name: RG_ring_0_113728 00:10:49.744 size: 1.000366 MiB name: RG_ring_1_113728 00:10:49.744 size: 1.000366 MiB name: RG_ring_4_113728 00:10:49.744 size: 1.000366 MiB name: RG_ring_5_113728 00:10:49.744 size: 0.125366 MiB name: RG_ring_2_113728 00:10:49.744 size: 0.015991 MiB name: RG_ring_3_113728 00:10:49.744 end memzones------- 00:10:49.744 13:53:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:49.744 heap id: 0 total size: 820.000000 MiB number of busy elements: 222 number of free elements: 18 00:10:49.744 list of free elements. 
size: 18.470703 MiB 00:10:49.744 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:49.744 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:49.744 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:49.744 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:49.744 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:49.744 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:49.744 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:49.744 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:49.744 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:49.744 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:49.744 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:49.744 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:49.744 element at address: 0x20001b000000 with size: 0.562195 MiB 00:10:49.744 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:49.744 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:49.744 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:49.744 element at address: 0x200028400000 with size: 0.399719 MiB 00:10:49.744 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:49.744 list of standard malloc elements. size: 199.264893 MiB 00:10:49.744 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:49.744 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:49.744 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:49.744 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:49.744 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:49.744 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:49.744 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:49.744 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:49.744 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:49.744 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:49.744 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:49.744 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:10:49.744 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:49.744 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:49.744 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:49.744 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:49.744 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0923c0 
with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:49.745 element at address: 0x200028466540 with size: 0.000244 MiB 
00:10:49.745 element at address: 0x200028466640 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846d300 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:49.745 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:49.745 list of memzone associated elements. 
size: 602.264404 MiB 00:10:49.745 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:49.745 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:49.745 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:49.745 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:49.745 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:49.745 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_113728_0 00:10:49.745 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:49.745 associated memzone info: size: 48.002930 MiB name: MP_evtpool_113728_0 00:10:49.745 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:49.745 associated memzone info: size: 48.002930 MiB name: MP_msgpool_113728_0 00:10:49.745 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:49.745 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:49.745 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:49.745 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:49.745 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:49.745 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_113728 00:10:49.745 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:49.746 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_113728 00:10:49.746 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:49.746 associated memzone info: size: 1.007996 MiB name: MP_evtpool_113728 00:10:49.746 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:49.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:49.746 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:49.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:49.746 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:49.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:49.746 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:49.746 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:49.746 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:49.746 associated memzone info: size: 1.000366 MiB name: RG_ring_0_113728 00:10:49.746 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:49.746 associated memzone info: size: 1.000366 MiB name: RG_ring_1_113728 00:10:49.746 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:49.746 associated memzone info: size: 1.000366 MiB name: RG_ring_4_113728 00:10:49.746 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:49.746 associated memzone info: size: 1.000366 MiB name: RG_ring_5_113728 00:10:49.746 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:49.746 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_113728 00:10:49.746 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:49.746 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:49.746 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:49.746 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:49.746 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:49.746 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:49.746 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:49.746 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_113728 00:10:49.746 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:49.746 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:49.746 element at address: 0x200028466740 with size: 0.023804 MiB 00:10:49.746 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:49.746 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:49.746 associated memzone info: size: 0.015991 MiB name: RG_ring_3_113728 00:10:49.746 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:10:49.746 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:49.746 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:49.746 associated memzone info: size: 0.000183 MiB name: MP_msgpool_113728 00:10:49.746 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:49.746 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_113728 00:10:49.746 element at address: 0x20002846d400 with size: 0.000366 MiB 00:10:49.746 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:49.746 13:53:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:49.746 13:53:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 113728 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 113728 ']' 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 113728 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113728 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.746 killing process with pid 113728 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113728' 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 113728 00:10:49.746 13:53:38 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 113728 00:10:52.272 00:10:52.272 real 0m3.651s 00:10:52.272 user 0m3.715s 00:10:52.272 sys 0m0.533s 00:10:52.272 ************************************ 00:10:52.272 END TEST dpdk_mem_utility 00:10:52.272 ************************************ 00:10:52.272 13:53:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.272 13:53:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:52.272 13:53:40 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:52.272 13:53:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:52.272 13:53:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.272 13:53:40 -- common/autotest_common.sh@10 -- # set +x 00:10:52.272 ************************************ 00:10:52.272 START TEST event 00:10:52.272 ************************************ 00:10:52.272 13:53:40 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:52.272 * Looking for test storage... 
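The dpdk_mem_utility run above first calls the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and then renders that dump twice with scripts/dpdk_mem_info.py: once as the heap/mempool/memzone summary and once per-element for heap 0 via -m 0. Reduced to the commands seen in the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

# Ask the running target to dump its memory state; the reply names the dump file.
"$RPC" env_dpdk_get_mem_stats          # -> { "filename": "/tmp/spdk_mem_dump.txt" }

# Summarize heaps, mempools and memzones, then list heap 0 element by element.
"$MEM_SCRIPT"
"$MEM_SCRIPT" -m 0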
00:10:52.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:52.272 13:53:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:52.272 13:53:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:52.272 13:53:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:52.272 13:53:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:52.272 13:53:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.272 13:53:40 event -- common/autotest_common.sh@10 -- # set +x 00:10:52.272 ************************************ 00:10:52.272 START TEST event_perf 00:10:52.272 ************************************ 00:10:52.272 13:53:40 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:52.272 Running I/O for 1 seconds...[2024-07-25 13:53:41.028730] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:10:52.272 [2024-07-25 13:53:41.029112] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113830 ] 00:10:52.272 [2024-07-25 13:53:41.223188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.529 [2024-07-25 13:53:41.448273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.529 [2024-07-25 13:53:41.448407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.529 [2024-07-25 13:53:41.448556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.530 [2024-07-25 13:53:41.448770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.904 Running I/O for 1 seconds... 00:10:53.904 lcore 0: 199830 00:10:53.904 lcore 1: 199828 00:10:53.904 lcore 2: 199831 00:10:53.904 lcore 3: 199832 00:10:53.904 done. 00:10:53.904 ************************************ 00:10:53.904 END TEST event_perf 00:10:53.904 ************************************ 00:10:53.904 00:10:53.904 real 0m1.857s 00:10:53.904 user 0m4.607s 00:10:53.904 sys 0m0.148s 00:10:53.904 13:53:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.904 13:53:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:53.904 13:53:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:53.904 13:53:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:53.904 13:53:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.904 13:53:42 event -- common/autotest_common.sh@10 -- # set +x 00:10:53.904 ************************************ 00:10:53.904 START TEST event_reactor 00:10:53.904 ************************************ 00:10:53.904 13:53:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:53.904 [2024-07-25 13:53:42.931092] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:10:53.904 [2024-07-25 13:53:42.931330] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113885 ] 00:10:54.163 [2024-07-25 13:53:43.098426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.421 [2024-07-25 13:53:43.298020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.797 test_start 00:10:55.797 oneshot 00:10:55.797 tick 100 00:10:55.797 tick 100 00:10:55.797 tick 250 00:10:55.797 tick 100 00:10:55.797 tick 100 00:10:55.797 tick 100 00:10:55.797 tick 250 00:10:55.797 tick 500 00:10:55.797 tick 100 00:10:55.797 tick 100 00:10:55.797 tick 250 00:10:55.797 tick 100 00:10:55.797 tick 100 00:10:55.797 test_end 00:10:55.797 00:10:55.797 real 0m1.776s 00:10:55.797 user 0m1.559s 00:10:55.797 sys 0m0.116s 00:10:55.797 13:53:44 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.797 13:53:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:55.797 ************************************ 00:10:55.797 END TEST event_reactor 00:10:55.797 ************************************ 00:10:55.797 13:53:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:55.797 13:53:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:55.797 13:53:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.797 13:53:44 event -- common/autotest_common.sh@10 -- # set +x 00:10:55.797 ************************************ 00:10:55.797 START TEST event_reactor_perf 00:10:55.797 ************************************ 00:10:55.797 13:53:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:55.797 [2024-07-25 13:53:44.758574] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:10:55.797 [2024-07-25 13:53:44.758841] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113935 ] 00:10:56.056 [2024-07-25 13:53:44.928555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.314 [2024-07-25 13:53:45.133441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.688 test_start 00:10:57.688 test_end 00:10:57.688 Performance: 348813 events per second 00:10:57.688 00:10:57.688 real 0m1.776s 00:10:57.688 user 0m1.540s 00:10:57.688 sys 0m0.136s 00:10:57.688 13:53:46 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.688 ************************************ 00:10:57.688 END TEST event_reactor_perf 00:10:57.688 13:53:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:57.688 ************************************ 00:10:57.688 13:53:46 event -- event/event.sh@49 -- # uname -s 00:10:57.688 13:53:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:57.688 13:53:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:57.688 13:53:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:57.688 13:53:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.688 13:53:46 event -- common/autotest_common.sh@10 -- # set +x 00:10:57.688 ************************************ 00:10:57.688 START TEST event_scheduler 00:10:57.688 ************************************ 00:10:57.688 13:53:46 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:57.688 * Looking for test storage... 00:10:57.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:57.688 13:53:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:57.688 13:53:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=114008 00:10:57.688 13:53:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:57.688 13:53:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:57.688 13:53:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 114008 00:10:57.688 13:53:46 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 114008 ']' 00:10:57.688 13:53:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.689 13:53:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.689 13:53:46 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.689 13:53:46 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.689 13:53:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:57.689 [2024-07-25 13:53:46.713399] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:10:57.689 [2024-07-25 13:53:46.713762] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114008 ] 00:10:57.948 [2024-07-25 13:53:46.904539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.208 [2024-07-25 13:53:47.158985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.208 [2024-07-25 13:53:47.159090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.208 [2024-07-25 13:53:47.159220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.208 [2024-07-25 13:53:47.159220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.774 13:53:47 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.774 13:53:47 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:10:58.774 13:53:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:58.774 13:53:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.774 13:53:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:58.775 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.775 POWER: Cannot set governor of lcore 0 to userspace 00:10:58.775 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.775 POWER: Cannot set governor of lcore 0 to performance 00:10:58.775 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.775 POWER: Cannot set governor of lcore 0 to userspace 00:10:58.775 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.775 POWER: Cannot set governor of lcore 0 to userspace 00:10:58.775 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:58.775 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:58.775 POWER: Unable to set Power Management Environment for lcore 0 00:10:58.775 [2024-07-25 13:53:47.681671] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:58.775 [2024-07-25 13:53:47.681737] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:58.775 [2024-07-25 13:53:47.681783] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:58.775 [2024-07-25 13:53:47.681830] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:58.775 [2024-07-25 13:53:47.681865] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:58.775 [2024-07-25 13:53:47.681891] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:58.775 13:53:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.775 13:53:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:58.775 13:53:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.775 13:53:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 [2024-07-25 13:53:47.987792] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
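For orientation, the scheduler bring-up traced above reduces to the short sequence sketched below. Only the binary path, flags, RPC names, and socket visible in this log are used; the real logic lives in test/event/scheduler/scheduler.sh, and the POWER/governor errors just mean cpufreq scaling_governor files are unavailable in this VM, after which the dynamic scheduler continues without the governor and the test proceeds.

    # Sketch only: start the scheduler test app paused, then configure it over /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # once the app is listening, select the dynamic scheduler and start the framework
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init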
00:10:59.128 13:53:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:59.128 13:53:47 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:59.128 13:53:47 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.128 13:53:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 ************************************ 00:10:59.128 START TEST scheduler_create_thread 00:10:59.128 ************************************ 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 2 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 3 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 4 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 5 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 6 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 7 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 8 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 9 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 10 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.128 13:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.063 13:53:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.063 13:53:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:00.063 13:53:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:00.063 13:53:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.063 13:53:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:01.438 13:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.438 00:11:01.438 real 0m2.151s 00:11:01.438 user 0m0.017s 00:11:01.438 sys 0m0.015s 00:11:01.438 13:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.438 13:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:01.438 ************************************ 00:11:01.438 END TEST scheduler_create_thread 00:11:01.438 ************************************ 00:11:01.438 13:53:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:01.438 13:53:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 114008 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 114008 ']' 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 114008 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114008 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:11:01.438 killing process with pid 114008 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114008' 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 114008 00:11:01.438 13:53:50 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 114008 00:11:01.697 [2024-07-25 13:53:50.633726] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
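The teardown just traced follows the same killprocess pattern used for the dpdk_mem_utility app (pid 113728) earlier in this log. A simplified sketch of the helper as it appears in the common/autotest_common.sh trace, not its exact implementation:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if it already exited
        ps --no-headers -o comm= "$pid"           # the trace also compares the comm name (reactor_2 here) against sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it so the test picks up the exit status
    }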
00:11:03.141 00:11:03.141 real 0m5.298s 00:11:03.141 user 0m8.731s 00:11:03.141 sys 0m0.457s 00:11:03.141 13:53:51 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.141 ************************************ 00:11:03.141 END TEST event_scheduler 00:11:03.141 ************************************ 00:11:03.141 13:53:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:03.141 13:53:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:03.141 13:53:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:03.141 13:53:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:03.141 13:53:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.141 13:53:51 event -- common/autotest_common.sh@10 -- # set +x 00:11:03.141 ************************************ 00:11:03.141 START TEST app_repeat 00:11:03.141 ************************************ 00:11:03.141 13:53:51 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=114133 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 114133' 00:11:03.141 Process app_repeat pid: 114133 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:03.141 spdk_app_start Round 0 00:11:03.141 13:53:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114133 /var/tmp/spdk-nbd.sock 00:11:03.141 13:53:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114133 ']' 00:11:03.141 13:53:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:03.141 13:53:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:03.141 13:53:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:03.141 13:53:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.141 13:53:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:03.141 [2024-07-25 13:53:51.956841] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:11:03.141 [2024-07-25 13:53:51.957038] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114133 ] 00:11:03.141 [2024-07-25 13:53:52.126845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.400 [2024-07-25 13:53:52.389372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.400 [2024-07-25 13:53:52.389381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.966 13:53:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.966 13:53:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:03.966 13:53:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:04.533 Malloc0 00:11:04.533 13:53:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:04.791 Malloc1 00:11:04.791 13:53:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.791 13:53:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:05.048 /dev/nbd0 00:11:05.048 13:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:05.048 13:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:05.048 13:53:54 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:05.048 1+0 records in 00:11:05.048 1+0 records out 00:11:05.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359074 s, 11.4 MB/s 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:05.048 13:53:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:05.048 13:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.048 13:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.048 13:53:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:05.306 /dev/nbd1 00:11:05.564 13:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:05.564 13:53:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:05.564 1+0 records in 00:11:05.564 1+0 records out 00:11:05.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793849 s, 5.2 MB/s 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:05.564 13:53:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:05.564 13:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.564 13:53:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.564 13:53:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:05.564 13:53:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.564 
13:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:05.822 { 00:11:05.822 "nbd_device": "/dev/nbd0", 00:11:05.822 "bdev_name": "Malloc0" 00:11:05.822 }, 00:11:05.822 { 00:11:05.822 "nbd_device": "/dev/nbd1", 00:11:05.822 "bdev_name": "Malloc1" 00:11:05.822 } 00:11:05.822 ]' 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:05.822 { 00:11:05.822 "nbd_device": "/dev/nbd0", 00:11:05.822 "bdev_name": "Malloc0" 00:11:05.822 }, 00:11:05.822 { 00:11:05.822 "nbd_device": "/dev/nbd1", 00:11:05.822 "bdev_name": "Malloc1" 00:11:05.822 } 00:11:05.822 ]' 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:05.822 /dev/nbd1' 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:05.822 /dev/nbd1' 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:05.822 256+0 records in 00:11:05.822 256+0 records out 00:11:05.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00775001 s, 135 MB/s 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.822 13:53:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:05.822 256+0 records in 00:11:05.822 256+0 records out 00:11:05.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251514 s, 41.7 MB/s 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:05.823 256+0 records in 00:11:05.823 256+0 records out 00:11:05.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030284 s, 34.6 MB/s 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:05.823 13:53:54 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.823 13:53:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.081 13:53:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:06.339 13:53:55 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.339 13:53:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:06.907 13:53:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:06.907 13:53:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:07.166 13:53:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:08.544 [2024-07-25 13:53:57.341627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.544 [2024-07-25 13:53:57.546478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.544 [2024-07-25 13:53:57.546485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.802 [2024-07-25 13:53:57.738487] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:08.802 [2024-07-25 13:53:57.738798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:10.177 spdk_app_start Round 1 00:11:10.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:10.177 13:53:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:10.177 13:53:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:10.177 13:53:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114133 /var/tmp/spdk-nbd.sock 00:11:10.177 13:53:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114133 ']' 00:11:10.177 13:53:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:10.177 13:53:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.177 13:53:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
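For readability, Round 0 just completed above condenses to the flow sketched below (Rounds 1 and 2 repeat it). The rpc.py path, socket, sizes, and device names come from the log; the temp-file name is shortened and the per-device write/compare steps are folded into one loop, so the ordering is a sketch rather than the literal trace.

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096             # Malloc0
    rpc bdev_malloc_create 64 4096             # Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $nbd          # read the data back through the nbd device and verify it
    done
    rm nbdrandtest
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
    rpc spdk_kill_instance SIGTERM             # ends the round; app_repeat starts the SPDK app again for the next one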
00:11:10.177 13:53:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.177 13:53:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:10.743 13:53:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.743 13:53:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:10.743 13:53:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:11.001 Malloc0 00:11:11.001 13:53:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:11.260 Malloc1 00:11:11.260 13:54:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.260 13:54:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:11.518 /dev/nbd0 00:11:11.518 13:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.518 13:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:11.518 1+0 records in 00:11:11.518 1+0 records out 
00:11:11.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337138 s, 12.1 MB/s 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:11.518 13:54:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:11.518 13:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.518 13:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:11.518 13:54:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:11.775 /dev/nbd1 00:11:11.775 13:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:11.775 13:54:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:11.775 13:54:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:11.775 13:54:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:11.775 13:54:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:11.775 13:54:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:11.775 13:54:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:11.775 13:54:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:11.775 13:54:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:12.033 13:54:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:12.033 13:54:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.033 1+0 records in 00:11:12.033 1+0 records out 00:11:12.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768704 s, 5.3 MB/s 00:11:12.033 13:54:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.033 13:54:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:12.033 13:54:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.033 13:54:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:12.033 13:54:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:12.033 13:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.033 13:54:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.033 13:54:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.033 13:54:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.033 13:54:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:12.033 13:54:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:12.033 { 00:11:12.033 "nbd_device": "/dev/nbd0", 00:11:12.033 "bdev_name": "Malloc0" 00:11:12.033 }, 00:11:12.033 { 00:11:12.033 "nbd_device": "/dev/nbd1", 00:11:12.033 "bdev_name": "Malloc1" 00:11:12.033 } 
00:11:12.033 ]' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:12.291 { 00:11:12.291 "nbd_device": "/dev/nbd0", 00:11:12.291 "bdev_name": "Malloc0" 00:11:12.291 }, 00:11:12.291 { 00:11:12.291 "nbd_device": "/dev/nbd1", 00:11:12.291 "bdev_name": "Malloc1" 00:11:12.291 } 00:11:12.291 ]' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:12.291 /dev/nbd1' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:12.291 /dev/nbd1' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:12.291 256+0 records in 00:11:12.291 256+0 records out 00:11:12.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723506 s, 145 MB/s 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:12.291 256+0 records in 00:11:12.291 256+0 records out 00:11:12.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027886 s, 37.6 MB/s 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:12.291 256+0 records in 00:11:12.291 256+0 records out 00:11:12.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354458 s, 29.6 MB/s 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.291 13:54:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:12.548 13:54:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.549 13:54:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.115 13:54:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:13.374 13:54:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:13.374 13:54:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:13.941 13:54:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:15.319 [2024-07-25 13:54:03.962548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.319 [2024-07-25 13:54:04.169167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.319 [2024-07-25 13:54:04.169172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.319 [2024-07-25 13:54:04.353667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:15.319 [2024-07-25 13:54:04.354111] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:16.696 spdk_app_start Round 2 00:11:16.696 13:54:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:16.696 13:54:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:16.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:16.696 13:54:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114133 /var/tmp/spdk-nbd.sock 00:11:16.696 13:54:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114133 ']' 00:11:16.696 13:54:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:16.696 13:54:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.696 13:54:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
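The nbd_stop_disks sequence traced above stops each export over the RPC socket and then waits for the kernel to actually drop the device. A minimal sketch of that pattern, with names and paths taken from the trace (the retry delay is not visible in the log and is an assumption):

  # stop the export, then poll /proc/partitions until the device name disappears
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  nbd_name=$(basename /dev/nbd0)
  for ((i = 1; i <= 20; i++)); do
      # -w matches the whole word, so "nbd0" does not also match "nbd10"
      grep -q -w "$nbd_name" /proc/partitions || break
      sleep 0.1   # assumed delay; the traced run found the device already gone on the first check
  done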
00:11:16.696 13:54:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.696 13:54:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:17.263 13:54:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.263 13:54:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:17.263 13:54:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:17.522 Malloc0 00:11:17.522 13:54:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:17.782 Malloc1 00:11:17.782 13:54:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:17.782 13:54:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:17.783 13:54:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:17.783 13:54:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:17.783 13:54:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:17.783 13:54:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:17.783 13:54:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:18.043 /dev/nbd0 00:11:18.043 13:54:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:18.043 13:54:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:18.043 1+0 records in 00:11:18.043 1+0 records out 
00:11:18.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064382 s, 6.4 MB/s 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.043 13:54:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:18.043 13:54:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.043 13:54:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.043 13:54:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:18.302 /dev/nbd1 00:11:18.302 13:54:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:18.302 13:54:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:18.302 1+0 records in 00:11:18.302 1+0 records out 00:11:18.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534343 s, 7.7 MB/s 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:11:18.302 13:54:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:18.562 13:54:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.562 13:54:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:11:18.562 13:54:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.562 13:54:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.562 13:54:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:18.562 13:54:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.562 13:54:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:18.821 { 00:11:18.821 "nbd_device": "/dev/nbd0", 00:11:18.821 "bdev_name": "Malloc0" 00:11:18.821 }, 00:11:18.821 { 00:11:18.821 "nbd_device": "/dev/nbd1", 00:11:18.821 "bdev_name": "Malloc1" 00:11:18.821 } 00:11:18.821 
]' 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:18.821 { 00:11:18.821 "nbd_device": "/dev/nbd0", 00:11:18.821 "bdev_name": "Malloc0" 00:11:18.821 }, 00:11:18.821 { 00:11:18.821 "nbd_device": "/dev/nbd1", 00:11:18.821 "bdev_name": "Malloc1" 00:11:18.821 } 00:11:18.821 ]' 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:18.821 /dev/nbd1' 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:18.821 /dev/nbd1' 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:18.821 256+0 records in 00:11:18.821 256+0 records out 00:11:18.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805686 s, 130 MB/s 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:18.821 256+0 records in 00:11:18.821 256+0 records out 00:11:18.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244185 s, 42.9 MB/s 00:11:18.821 13:54:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:18.822 256+0 records in 00:11:18.822 256+0 records out 00:11:18.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266955 s, 39.3 MB/s 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:18.822 13:54:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.081 13:54:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:19.648 13:54:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 
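The write/verify round-trip traced above reduces to a handful of dd and cmp calls; a condensed sketch using the block size, count and temp-file path shown in the log:

  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=('/dev/nbd0' '/dev/nbd1')
  # write: 1 MiB of random data, pushed to every NBD export with O_DIRECT
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for nbd in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
  done
  # verify: each export must read back byte-for-byte identical to the temp file
  for nbd in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$nbd"   # a non-zero exit status fails the test
  done
  rm "$tmp_file"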
00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:20.001 13:54:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:20.001 13:54:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:20.260 13:54:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:21.636 [2024-07-25 13:54:10.363455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.636 [2024-07-25 13:54:10.577178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.636 [2024-07-25 13:54:10.577184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.893 [2024-07-25 13:54:10.767592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:21.894 [2024-07-25 13:54:10.767919] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:23.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:23.268 13:54:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 114133 /var/tmp/spdk-nbd.sock 00:11:23.268 13:54:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114133 ']' 00:11:23.268 13:54:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:23.268 13:54:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.268 13:54:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
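Every app_repeat round traced in this log has the same shape: wait for the app's RPC socket, create two 64 MB malloc bdevs, run the NBD write/verify above, then ask the app to restart via SIGTERM and give it a few seconds. A rough sketch (114133 is the app pid from the trace; rpc.py stands for scripts/rpc.py):

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten 114133 /var/tmp/spdk-nbd.sock                     # app re-listens after each restart
      rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc0
      rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # Malloc1
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM     # app catches it and starts the next round
      sleep 3
  done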
00:11:23.268 13:54:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.268 13:54:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:11:23.527 13:54:12 event.app_repeat -- event/event.sh@39 -- # killprocess 114133 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 114133 ']' 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 114133 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114133 00:11:23.527 killing process with pid 114133 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114133' 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@969 -- # kill 114133 00:11:23.527 13:54:12 event.app_repeat -- common/autotest_common.sh@974 -- # wait 114133 00:11:24.908 spdk_app_start is called in Round 0. 00:11:24.908 Shutdown signal received, stop current app iteration 00:11:24.908 Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 reinitialization... 00:11:24.908 spdk_app_start is called in Round 1. 00:11:24.908 Shutdown signal received, stop current app iteration 00:11:24.908 Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 reinitialization... 00:11:24.908 spdk_app_start is called in Round 2. 00:11:24.908 Shutdown signal received, stop current app iteration 00:11:24.908 Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 reinitialization... 00:11:24.908 spdk_app_start is called in Round 3. 00:11:24.908 Shutdown signal received, stop current app iteration 00:11:24.908 ************************************ 00:11:24.908 END TEST app_repeat 00:11:24.908 ************************************ 00:11:24.908 13:54:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:24.908 13:54:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:24.908 00:11:24.908 real 0m21.761s 00:11:24.908 user 0m47.354s 00:11:24.908 sys 0m3.001s 00:11:24.908 13:54:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.908 13:54:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:24.908 13:54:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:24.908 13:54:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:24.908 13:54:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:24.908 13:54:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.908 13:54:13 event -- common/autotest_common.sh@10 -- # set +x 00:11:24.908 ************************************ 00:11:24.908 START TEST cpu_locks 00:11:24.908 ************************************ 00:11:24.908 13:54:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:24.908 * Looking for test storage... 
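The killprocess helper traced just above checks that the pid is still alive and is not a sudo wrapper before terminating and reaping it. A simplified sketch (the helper's sudo branch is not exercised in this trace and is reduced to a bail-out here):

  killprocess() {
      local pid=$1 process_name=
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                                    # fails if the process is already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [ "$process_name" = sudo ]; then
          return 1                                      # simplification: never SIGTERM the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap it so a non-zero exit is not lost
  }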
00:11:24.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:24.909 13:54:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:24.909 13:54:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:24.909 13:54:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:24.909 13:54:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:24.909 13:54:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:24.909 13:54:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.909 13:54:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:24.909 ************************************ 00:11:24.909 START TEST default_locks 00:11:24.909 ************************************ 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=114679 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 114679 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 114679 ']' 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.909 13:54:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:24.909 [2024-07-25 13:54:13.889990] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:11:24.909 [2024-07-25 13:54:13.890454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114679 ] 00:11:25.168 [2024-07-25 13:54:14.060058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.426 [2024-07-25 13:54:14.282686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.363 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.363 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:11:26.363 13:54:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 114679 00:11:26.363 13:54:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 114679 00:11:26.363 13:54:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 114679 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 114679 ']' 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 114679 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114679 00:11:26.622 killing process with pid 114679 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114679' 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 114679 00:11:26.622 13:54:15 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 114679 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 114679 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 114679 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:29.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
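The locks_exist check traced above is just a file-lock lookup: a target started with -m 0x1 is expected to hold a lock whose name contains spdk_cpu_lock for core 0, and lslocks -p lists the locks held by that pid:

  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 114679   # passes while the spdk_tgt above is running, fails after killprocess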
00:11:29.210 ERROR: process (pid: 114679) is no longer running 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 114679 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 114679 ']' 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.210 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (114679) - No such process 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:29.210 ************************************ 00:11:29.210 END TEST default_locks 00:11:29.210 ************************************ 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:29.210 00:11:29.210 real 0m4.135s 00:11:29.210 user 0m4.217s 00:11:29.210 sys 0m0.688s 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.210 13:54:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.210 13:54:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:29.210 13:54:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:29.210 13:54:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.210 13:54:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.210 ************************************ 00:11:29.210 START TEST default_locks_via_rpc 00:11:29.210 ************************************ 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=114764 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 
-- # waitforlisten 114764 00:11:29.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 114764 ']' 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.210 13:54:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.210 [2024-07-25 13:54:18.058783] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:11:29.210 [2024-07-25 13:54:18.059276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114764 ] 00:11:29.210 [2024-07-25 13:54:18.230161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.469 [2024-07-25 13:54:18.509159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.403 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.661 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.661 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 114764 00:11:30.661 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 114764 00:11:30.661 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 114764 00:11:30.918 13:54:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 114764 ']' 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 114764 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114764 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114764' 00:11:30.918 killing process with pid 114764 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 114764 00:11:30.918 13:54:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 114764 00:11:33.445 ************************************ 00:11:33.445 END TEST default_locks_via_rpc 00:11:33.445 ************************************ 00:11:33.445 00:11:33.445 real 0m3.879s 00:11:33.445 user 0m3.998s 00:11:33.445 sys 0m0.666s 00:11:33.445 13:54:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.445 13:54:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.445 13:54:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:33.445 13:54:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:33.445 13:54:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.445 13:54:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:33.445 ************************************ 00:11:33.445 START TEST non_locking_app_on_locked_coremask 00:11:33.445 ************************************ 00:11:33.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=114843 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 114843 /var/tmp/spdk.sock 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114843 ']' 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
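The default_locks_via_rpc case that finishes above toggles the same core-mask locks at runtime instead of at startup. In outline (114764 is the target pid from the trace; the no_locks helper actually collects lock files into an array, shown here via lslocks for brevity):

  rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  ! lslocks -p 114764 | grep -q spdk_cpu_lock      # lock on core 0 released
  rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p 114764 | grep -q spdk_cpu_lock        # lock on core 0 re-acquired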
00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.446 13:54:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:33.446 [2024-07-25 13:54:21.994385] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:11:33.446 [2024-07-25 13:54:21.994891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114843 ] 00:11:33.446 [2024-07-25 13:54:22.166213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.446 [2024-07-25 13:54:22.375158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=114866 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 114866 /var/tmp/spdk2.sock 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114866 ']' 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.377 13:54:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.377 [2024-07-25 13:54:23.216559] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:11:34.377 [2024-07-25 13:54:23.216920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114866 ] 00:11:34.377 [2024-07-25 13:54:23.371541] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:34.377 [2024-07-25 13:54:23.371647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.939 [2024-07-25 13:54:23.815635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 114843 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114843 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 114843 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114843 ']' 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 114843 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.463 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114843 00:11:37.722 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.723 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.723 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114843' 00:11:37.723 killing process with pid 114843 00:11:37.723 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 114843 00:11:37.723 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 114843 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 114866 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114866 ']' 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 114866 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114866 00:11:42.986 killing process with pid 114866 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114866' 00:11:42.986 
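The non_locking_app_on_locked_coremask case being torn down here ran two targets at once: the first claims the core-0 lock as usual, the second opts out with --disable-cpumask-locks and its own RPC socket, so both can run on the same mask. In outline (paths, flags and pids as in the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                      # pid 114843, holds spdk_cpu_lock for core 0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
      --disable-cpumask-locks -r /var/tmp/spdk2.sock &                          # pid 114866, takes no lock
  lslocks -p 114843 | grep -q spdk_cpu_lock                                     # only the first instance owns the lock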
13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 114866 00:11:42.986 13:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 114866 00:11:44.431 ************************************ 00:11:44.431 END TEST non_locking_app_on_locked_coremask 00:11:44.431 ************************************ 00:11:44.431 00:11:44.431 real 0m11.207s 00:11:44.431 user 0m11.827s 00:11:44.431 sys 0m1.373s 00:11:44.431 13:54:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.431 13:54:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.431 13:54:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:44.431 13:54:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:44.431 13:54:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.431 13:54:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:44.431 ************************************ 00:11:44.431 START TEST locking_app_on_unlocked_coremask 00:11:44.431 ************************************ 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=115029 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 115029 /var/tmp/spdk.sock 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 115029 ']' 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.431 13:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.431 [2024-07-25 13:54:33.252781] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:11:44.431 [2024-07-25 13:54:33.253250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115029 ] 00:11:44.431 [2024-07-25 13:54:33.419533] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:44.431 [2024-07-25 13:54:33.419875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.690 [2024-07-25 13:54:33.633391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:45.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=115049 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 115049 /var/tmp/spdk2.sock 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 115049 ']' 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.622 13:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.622 [2024-07-25 13:54:34.536073] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:11:45.622 [2024-07-25 13:54:34.536586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115049 ] 00:11:45.879 [2024-07-25 13:54:34.713219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.136 [2024-07-25 13:54:35.130728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.666 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.666 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:48.666 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 115049 00:11:48.666 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115049 00:11:48.666 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 115029 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 115029 ']' 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 115029 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115029 00:11:48.954 killing process with pid 115029 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115029' 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 115029 00:11:48.954 13:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 115029 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 115049 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 115049 ']' 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 115049 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115049 00:11:53.142 killing process with pid 115049 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.142 13:54:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115049' 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 115049 00:11:53.142 13:54:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 115049 00:11:55.684 ************************************ 00:11:55.684 END TEST locking_app_on_unlocked_coremask 00:11:55.684 ************************************ 00:11:55.684 00:11:55.684 real 0m11.226s 00:11:55.684 user 0m11.842s 00:11:55.684 sys 0m1.339s 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:55.684 13:54:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:55.684 13:54:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:55.684 13:54:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.684 13:54:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:55.684 ************************************ 00:11:55.684 START TEST locking_app_on_locked_coremask 00:11:55.684 ************************************ 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=115207 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 115207 /var/tmp/spdk.sock 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 115207 ']' 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.684 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.685 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.685 13:54:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:55.685 [2024-07-25 13:54:44.525218] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:11:55.685 [2024-07-25 13:54:44.525634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115207 ] 00:11:55.685 [2024-07-25 13:54:44.678881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.943 [2024-07-25 13:54:44.886003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=115228 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 115228 /var/tmp/spdk2.sock 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 115228 /var/tmp/spdk2.sock 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 115228 /var/tmp/spdk2.sock 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 115228 ']' 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:56.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:56.880 13:54:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:56.880 [2024-07-25 13:54:45.827721] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:11:56.880 [2024-07-25 13:54:45.828355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115228 ] 00:11:57.139 [2024-07-25 13:54:46.017136] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 115207 has claimed it. 00:11:57.139 [2024-07-25 13:54:46.017244] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:57.707 ERROR: process (pid: 115228) is no longer running 00:11:57.707 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (115228) - No such process 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 115207 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115207 00:11:57.707 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 115207 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 115207 ']' 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 115207 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115207 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115207' 00:11:58.013 killing process with pid 115207 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 115207 00:11:58.013 13:54:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 115207 00:12:00.544 00:12:00.544 real 0m4.612s 00:12:00.544 user 0m5.055s 00:12:00.544 sys 0m0.845s 00:12:00.544 ************************************ 00:12:00.544 END TEST locking_app_on_locked_coremask 00:12:00.544 ************************************ 00:12:00.544 13:54:49 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.544 13:54:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:00.544 13:54:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:00.544 13:54:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.544 13:54:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.544 13:54:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:00.544 ************************************ 00:12:00.544 START TEST locking_overlapped_coremask 00:12:00.544 ************************************ 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=115297 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 115297 /var/tmp/spdk.sock 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 115297 ']' 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.544 13:54:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:00.544 [2024-07-25 13:54:49.198573] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:12:00.544 [2024-07-25 13:54:49.198975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115297 ] 00:12:00.544 [2024-07-25 13:54:49.377423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:00.802 [2024-07-25 13:54:49.607814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.802 [2024-07-25 13:54:49.607882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.802 [2024-07-25 13:54:49.607884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=115327 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 115327 /var/tmp/spdk2.sock 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 115327 /var/tmp/spdk2.sock 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 115327 /var/tmp/spdk2.sock 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 115327 ']' 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:01.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.736 13:54:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:01.736 [2024-07-25 13:54:50.615256] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:12:01.736 [2024-07-25 13:54:50.615751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115327 ] 00:12:01.995 [2024-07-25 13:54:50.820282] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115297 has claimed it. 00:12:01.995 [2024-07-25 13:54:50.820627] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:02.635 ERROR: process (pid: 115327) is no longer running 00:12:02.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (115327) - No such process 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 115297 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 115297 ']' 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 115297 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115297 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.635 killing process with pid 115297 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115297' 00:12:02.635 13:54:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 115297 00:12:02.635 13:54:51 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 115297 00:12:05.166 00:12:05.166 real 0m4.757s 00:12:05.166 user 0m12.607s 00:12:05.166 sys 0m0.687s 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:05.166 ************************************ 00:12:05.166 END TEST locking_overlapped_coremask 00:12:05.166 ************************************ 00:12:05.166 13:54:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:05.166 13:54:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:05.166 13:54:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.166 13:54:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:05.166 ************************************ 00:12:05.166 START TEST locking_overlapped_coremask_via_rpc 00:12:05.166 ************************************ 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=115398 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 115398 /var/tmp/spdk.sock 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115398 ']' 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.166 13:54:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.166 [2024-07-25 13:54:54.018466] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:12:05.166 [2024-07-25 13:54:54.018961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115398 ] 00:12:05.166 [2024-07-25 13:54:54.196216] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:05.166 [2024-07-25 13:54:54.196496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:05.425 [2024-07-25 13:54:54.454050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.425 [2024-07-25 13:54:54.454200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.425 [2024-07-25 13:54:54.454211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=115426 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 115426 /var/tmp/spdk2.sock 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115426 ']' 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:06.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.422 13:54:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.422 [2024-07-25 13:54:55.392755] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:12:06.422 [2024-07-25 13:54:55.393153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115426 ] 00:12:06.700 [2024-07-25 13:54:55.572511] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:06.700 [2024-07-25 13:54:55.585823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.003 [2024-07-25 13:54:56.037952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.261 [2024-07-25 13:54:56.053987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.261 [2024-07-25 13:54:56.053987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.160 [2024-07-25 13:54:58.184950] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115398 has claimed it. 
00:12:09.160 request: 00:12:09.160 { 00:12:09.160 "method": "framework_enable_cpumask_locks", 00:12:09.160 "req_id": 1 00:12:09.160 } 00:12:09.160 Got JSON-RPC error response 00:12:09.160 response: 00:12:09.160 { 00:12:09.160 "code": -32603, 00:12:09.160 "message": "Failed to claim CPU core: 2" 00:12:09.160 } 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 115398 /var/tmp/spdk.sock 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115398 ']' 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.160 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.418 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.418 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:09.418 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 115426 /var/tmp/spdk2.sock 00:12:09.418 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115426 ']' 00:12:09.418 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:09.418 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:09.418 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:12:09.419 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.419 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:09.985 00:12:09.985 real 0m4.818s 00:12:09.985 user 0m1.843s 00:12:09.985 sys 0m0.172s 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.985 13:54:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.985 ************************************ 00:12:09.985 END TEST locking_overlapped_coremask_via_rpc 00:12:09.985 ************************************ 00:12:09.985 13:54:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:09.985 13:54:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115398 ]] 00:12:09.985 13:54:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115398 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115398 ']' 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115398 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115398 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.985 killing process with pid 115398 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115398' 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 115398 00:12:09.985 13:54:58 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 115398 00:12:12.533 13:55:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115426 ]] 00:12:12.534 13:55:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115426 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115426 ']' 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115426 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115426 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:12.534 killing process with pid 115426 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115426' 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 115426 00:12:12.534 13:55:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 115426 00:12:15.067 13:55:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:15.067 Process with pid 115398 is not found 00:12:15.067 Process with pid 115426 is not found 00:12:15.067 13:55:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:15.067 13:55:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115398 ]] 00:12:15.067 13:55:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115398 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115398 ']' 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115398 00:12:15.067 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (115398) - No such process 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 115398 is not found' 00:12:15.067 13:55:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115426 ]] 00:12:15.067 13:55:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115426 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115426 ']' 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115426 00:12:15.067 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (115426) - No such process 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 115426 is not found' 00:12:15.067 13:55:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:15.067 ************************************ 00:12:15.067 END TEST cpu_locks 00:12:15.067 ************************************ 00:12:15.067 00:12:15.067 real 0m50.105s 00:12:15.067 user 1m28.342s 00:12:15.067 sys 0m6.851s 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.067 13:55:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:15.067 ************************************ 00:12:15.067 END TEST event 00:12:15.067 ************************************ 00:12:15.067 00:12:15.067 real 1m22.976s 00:12:15.067 user 2m32.357s 00:12:15.067 sys 0m10.873s 00:12:15.067 13:55:03 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.067 13:55:03 event -- common/autotest_common.sh@10 -- # set +x 00:12:15.067 13:55:03 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:15.067 13:55:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:15.067 13:55:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.067 13:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:15.067 ************************************ 00:12:15.067 START TEST thread 00:12:15.067 ************************************ 00:12:15.067 13:55:03 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:15.067 * Looking for test 
storage... 00:12:15.067 ************************************ 00:12:15.067 START TEST thread_poller_perf 00:12:15.067 ************************************ 00:12:15.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:15.067 13:55:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:15.067 13:55:04 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:12:15.067 13:55:04 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.067 13:55:04 thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.067 13:55:04 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:15.067 [2024-07-25 13:55:04.071219] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:12:15.067 [2024-07-25 13:55:04.071625] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115632 ] 00:12:15.326 [2024-07-25 13:55:04.243886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.584 [2024-07-25 13:55:04.512340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.584 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:16.959 ====================================== 00:12:16.959 busy:2209032721 (cyc) 00:12:16.959 total_run_count: 294000 00:12:16.959 tsc_hz: 2200000000 (cyc) 00:12:16.959 ====================================== 00:12:16.959 poller_cost: 7513 (cyc), 3415 (nsec) 00:12:16.959 ************************************ 00:12:16.959 END TEST thread_poller_perf 00:12:16.959 ************************************ 00:12:16.959 00:12:16.959 real 0m1.926s 00:12:16.959 user 0m1.696s 00:12:16.959 sys 0m0.128s 00:12:16.959 13:55:05 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.959 13:55:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:16.959 13:55:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:16.959 13:55:05 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:12:16.959 13:55:05 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.959 13:55:05 thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.217 ************************************ 00:12:17.217 START TEST thread_poller_perf 00:12:17.217 ************************************ 00:12:17.217 13:55:06 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:17.217 [2024-07-25 13:55:06.056525] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:12:17.217 [2024-07-25 13:55:06.057494] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115682 ] 00:12:17.217 [2024-07-25 13:55:06.237013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.476 [2024-07-25 13:55:06.504361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.476 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:19.376 ====================================== 00:12:19.376 busy:2205148709 (cyc) 00:12:19.376 total_run_count: 3903000 00:12:19.376 tsc_hz: 2200000000 (cyc) 00:12:19.376 ====================================== 00:12:19.376 poller_cost: 564 (cyc), 256 (nsec) 00:12:19.376 ************************************ 00:12:19.376 END TEST thread_poller_perf 00:12:19.376 ************************************ 00:12:19.376 00:12:19.376 real 0m1.902s 00:12:19.376 user 0m1.687s 00:12:19.376 sys 0m0.113s 00:12:19.376 13:55:07 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.376 13:55:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:19.376 13:55:07 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:12:19.376 13:55:07 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:19.376 13:55:07 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:19.376 13:55:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.376 13:55:07 thread -- common/autotest_common.sh@10 -- # set +x 00:12:19.376 ************************************ 00:12:19.376 START TEST thread_spdk_lock 00:12:19.376 ************************************ 00:12:19.376 13:55:07 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:19.376 [2024-07-25 13:55:08.014365] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:12:19.376 [2024-07-25 13:55:08.014629] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115730 ] 00:12:19.376 [2024-07-25 13:55:08.194861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.634 [2024-07-25 13:55:08.501878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.634 [2024-07-25 13:55:08.501884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.201 [2024-07-25 13:55:09.209911] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:20.201 [2024-07-25 13:55:09.211797] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:12:20.201 [2024-07-25 13:55:09.211906] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x55d60aba84c0 00:12:20.201 [2024-07-25 13:55:09.222002] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:20.201 [2024-07-25 13:55:09.222405] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:20.201 [2024-07-25 13:55:09.222637] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:20.768 Starting test contend 00:12:20.768 Worker Delay Wait us Hold us Total us 00:12:20.768 0 3 116541 219783 336325 00:12:20.768 1 5 35123 329237 364360 00:12:20.768 PASS test contend 00:12:20.768 Starting test hold_by_poller 00:12:20.768 PASS test hold_by_poller 00:12:20.768 Starting test hold_by_message 00:12:20.768 PASS test hold_by_message 00:12:20.768 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:12:20.768 100014 assertions passed 00:12:20.768 0 assertions failed 00:12:20.768 ************************************ 00:12:20.768 END TEST thread_spdk_lock 00:12:20.768 ************************************ 00:12:20.768 00:12:20.768 real 0m1.706s 00:12:20.768 user 0m2.188s 00:12:20.768 sys 0m0.136s 00:12:20.768 13:55:09 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.768 13:55:09 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:12:20.768 00:12:20.768 real 0m5.794s 00:12:20.768 user 0m5.703s 00:12:20.768 sys 0m0.499s 00:12:20.768 13:55:09 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.768 13:55:09 thread -- common/autotest_common.sh@10 -- # set +x 00:12:20.768 ************************************ 00:12:20.768 END TEST thread 00:12:20.768 ************************************ 00:12:20.768 13:55:09 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:12:20.768 13:55:09 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:20.768 13:55:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:20.768 13:55:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:12:20.768 13:55:09 -- common/autotest_common.sh@10 -- # set +x 00:12:20.768 ************************************ 00:12:20.768 START TEST app_cmdline 00:12:20.768 ************************************ 00:12:20.768 13:55:09 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:21.027 * Looking for test storage... 00:12:21.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:21.027 13:55:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:21.027 13:55:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=115826 00:12:21.027 13:55:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:21.027 13:55:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 115826 00:12:21.027 13:55:09 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 115826 ']' 00:12:21.027 13:55:09 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.027 13:55:09 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.027 13:55:09 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.027 13:55:09 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.027 13:55:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:21.027 [2024-07-25 13:55:09.925504] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:12:21.027 [2024-07-25 13:55:09.925742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115826 ] 00:12:21.285 [2024-07-25 13:55:10.100015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.544 [2024-07-25 13:55:10.364720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:22.480 { 00:12:22.480 "version": "SPDK v24.09-pre git sha1 50fa6ca31", 00:12:22.480 "fields": { 00:12:22.480 "major": 24, 00:12:22.480 "minor": 9, 00:12:22.480 "patch": 0, 00:12:22.480 "suffix": "-pre", 00:12:22.480 "commit": "50fa6ca31" 00:12:22.480 } 00:12:22.480 } 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@10 -- # 
set +x 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:22.480 13:55:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:22.480 13:55:11 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:22.738 request: 00:12:22.738 { 00:12:22.738 "method": "env_dpdk_get_mem_stats", 00:12:22.738 "req_id": 1 00:12:22.738 } 00:12:22.738 Got JSON-RPC error response 00:12:22.738 response: 00:12:22.738 { 00:12:22.738 "code": -32601, 00:12:22.738 "message": "Method not found" 00:12:22.738 } 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:22.738 13:55:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 115826 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 115826 ']' 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 115826 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115826 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.738 killing process with pid 115826 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115826' 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@969 -- # kill 115826 00:12:22.738 13:55:11 app_cmdline -- common/autotest_common.sh@974 -- # wait 115826 00:12:25.266 00:12:25.266 real 0m4.189s 00:12:25.266 user 0m4.594s 00:12:25.266 sys 0m0.622s 00:12:25.266 13:55:13 
app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.266 13:55:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:25.266 ************************************ 00:12:25.266 END TEST app_cmdline 00:12:25.266 ************************************ 00:12:25.266 13:55:13 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:25.266 13:55:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:25.266 13:55:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.266 13:55:13 -- common/autotest_common.sh@10 -- # set +x 00:12:25.266 ************************************ 00:12:25.266 START TEST version 00:12:25.266 ************************************ 00:12:25.266 13:55:14 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:25.266 * Looking for test storage... 00:12:25.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:25.266 13:55:14 version -- app/version.sh@17 -- # get_header_version major 00:12:25.266 13:55:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # cut -f2 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # tr -d '"' 00:12:25.266 13:55:14 version -- app/version.sh@17 -- # major=24 00:12:25.266 13:55:14 version -- app/version.sh@18 -- # get_header_version minor 00:12:25.266 13:55:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # tr -d '"' 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # cut -f2 00:12:25.266 13:55:14 version -- app/version.sh@18 -- # minor=9 00:12:25.266 13:55:14 version -- app/version.sh@19 -- # get_header_version patch 00:12:25.266 13:55:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # cut -f2 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # tr -d '"' 00:12:25.266 13:55:14 version -- app/version.sh@19 -- # patch=0 00:12:25.266 13:55:14 version -- app/version.sh@20 -- # get_header_version suffix 00:12:25.266 13:55:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # cut -f2 00:12:25.266 13:55:14 version -- app/version.sh@14 -- # tr -d '"' 00:12:25.266 13:55:14 version -- app/version.sh@20 -- # suffix=-pre 00:12:25.266 13:55:14 version -- app/version.sh@22 -- # version=24.9 00:12:25.266 13:55:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:25.266 13:55:14 version -- app/version.sh@28 -- # version=24.9rc0 00:12:25.266 13:55:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:25.266 13:55:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:25.266 13:55:14 version -- app/version.sh@30 -- # py_version=24.9rc0 00:12:25.266 13:55:14 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:12:25.266 00:12:25.266 real 0m0.168s 00:12:25.266 user 0m0.111s 00:12:25.266 
sys 0m0.095s 00:12:25.266 13:55:14 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.266 13:55:14 version -- common/autotest_common.sh@10 -- # set +x 00:12:25.266 ************************************ 00:12:25.266 END TEST version 00:12:25.266 ************************************ 00:12:25.266 13:55:14 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:12:25.266 13:55:14 -- spdk/autotest.sh@193 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:25.266 13:55:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:25.266 13:55:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.266 13:55:14 -- common/autotest_common.sh@10 -- # set +x 00:12:25.266 ************************************ 00:12:25.266 START TEST blockdev_general 00:12:25.266 ************************************ 00:12:25.266 13:55:14 blockdev_general -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:25.266 * Looking for test storage... 00:12:25.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:25.266 13:55:14 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:25.266 13:55:14 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=116009 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:25.525 13:55:14 
blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 116009 00:12:25.525 13:55:14 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 116009 ']' 00:12:25.525 13:55:14 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.525 13:55:14 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:25.525 13:55:14 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.525 13:55:14 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.525 13:55:14 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.525 13:55:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:25.525 [2024-07-25 13:55:14.390655] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:12:25.525 [2024-07-25 13:55:14.390962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116009 ] 00:12:25.525 [2024-07-25 13:55:14.562935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.783 [2024-07-25 13:55:14.817956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.350 13:55:15 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.350 13:55:15 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:12:26.350 13:55:15 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:26.350 13:55:15 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:12:26.350 13:55:15 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:12:26.350 13:55:15 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.350 13:55:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.286 [2024-07-25 13:55:16.106070] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:27.286 [2024-07-25 13:55:16.106388] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:27.286 00:12:27.286 [2024-07-25 13:55:16.114022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:27.286 [2024-07-25 13:55:16.114205] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:27.286 00:12:27.286 Malloc0 00:12:27.286 Malloc1 00:12:27.286 Malloc2 00:12:27.286 Malloc3 00:12:27.545 Malloc4 00:12:27.545 Malloc5 00:12:27.545 Malloc6 00:12:27.545 Malloc7 00:12:27.545 Malloc8 00:12:27.545 Malloc9 00:12:27.545 [2024-07-25 13:55:16.528177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:27.545 [2024-07-25 13:55:16.528433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.545 [2024-07-25 13:55:16.528519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:27.545 [2024-07-25 13:55:16.528823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.545 [2024-07-25 13:55:16.531598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.545 
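The device stack exercised by this suite is assembled here entirely through rpc_cmd (setup_bdev_conf). As a rough standalone sketch, assuming the sizes shown in the bdev_get_bdevs dump further down and that the option spellings match this SPDK revision, the same layout could be recreated with scripts/rpc.py roughly as follows:

  # Sketch only: names and sizes taken from the dump below, flags are assumptions
  for i in $(seq 0 9); do
      scripts/rpc.py bdev_malloc_create -b "Malloc$i" 32 512      # 32 MiB, 512 B blocks
  done
  scripts/rpc.py bdev_set_qos_limit Malloc0 --rw_ios_per_sec 20000   # QoS seen on Malloc0
  scripts/rpc.py bdev_split_create Malloc1 2                      # Malloc1p0, Malloc1p1
  scripts/rpc.py bdev_split_create Malloc2 8                      # Malloc2p0 .. Malloc2p7
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT        # passthru registered above
  scripts/rpc.py bdev_raid_create -n raid0   -r raid0  -z 64 -b "Malloc4 Malloc5"
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b "Malloc6 Malloc7"
  scripts/rpc.py bdev_raid_create -n raid1   -r raid1        -b "Malloc8 Malloc9"
  scripts/rpc.py bdev_aio_create test/bdev/aiofile AIO0 2048      # file created by dd below

The bdev list dumped a little further down is then just the unclaimed entries of bdev_get_bdevs; the jq filter is the one visible in the trace:

  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'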
[2024-07-25 13:55:16.531798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:27.545 TestPT 00:12:27.545 13:55:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.545 13:55:16 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:27.805 5000+0 records in 00:12:27.805 5000+0 records out 00:12:27.805 10240000 bytes (10 MB, 9.8 MiB) copied, 0.028198 s, 363 MB/s 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 AIO0 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 13:55:16 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:27.806 13:55:16 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:27.807 13:55:16 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "93b690c7-7062-477c-a129-c7917ee07b27"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "93b690c7-7062-477c-a129-c7917ee07b27",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d36ececd-d3d9-5f5c-acb7-696e13d4b03a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d36ececd-d3d9-5f5c-acb7-696e13d4b03a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "99bf90de-b871-5804-88d3-75c35b33fd5a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "99bf90de-b871-5804-88d3-75c35b33fd5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d9fec1dc-541a-547a-85e9-75790b35b511"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d9fec1dc-541a-547a-85e9-75790b35b511",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "67225ad3-f461-565f-b4e8-fa5701d4aa8e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "67225ad3-f461-565f-b4e8-fa5701d4aa8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ef4688f2-d685-560d-99b3-5fa29a7cc185"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef4688f2-d685-560d-99b3-5fa29a7cc185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "adad4d29-3ab2-5102-85ff-3d08cc704918"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "adad4d29-3ab2-5102-85ff-3d08cc704918",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "071c4bd7-5bf9-50e0-a88c-4a7b713d5c6a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "071c4bd7-5bf9-50e0-a88c-4a7b713d5c6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "009e9488-d3d6-567f-9d49-31e1957cf8fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "009e9488-d3d6-567f-9d49-31e1957cf8fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8b0fffaa-41e6-5049-8ebc-f0cb01715734"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8b0fffaa-41e6-5049-8ebc-f0cb01715734",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "5eec75fc-341f-581c-8223-d24bcd6b5797"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5eec75fc-341f-581c-8223-d24bcd6b5797",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "3c5f88ba-5bde-5c21-9692-08512aaac0e9"' ' ],' 
' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3c5f88ba-5bde-5c21-9692-08512aaac0e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b9484430-cff2-4918-be7b-dfcced59da87"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b9484430-cff2-4918-be7b-dfcced59da87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b9484430-cff2-4918-be7b-dfcced59da87",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a48c5e83-1e48-4aed-9222-95e24500ad7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "fc3de3b6-7e8c-495e-a109-ddf687086163",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c845f0c0-4efc-41ab-a71c-b00d87cea294"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c845f0c0-4efc-41ab-a71c-b00d87cea294",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c845f0c0-4efc-41ab-a71c-b00d87cea294",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "ed4e3357-2c86-46fd-b2aa-b23b8ae290f3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "3a8d53ec-8a73-4e17-95b2-4a5d49f5546c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2b3c782e-b7c4-4038-a536-d0eef4a5ba21",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d49393c9-9609-4b56-b4cc-aa268b56f636",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "48c48637-a846-48c0-81e9-cf120a63503d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "48c48637-a846-48c0-81e9-cf120a63503d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:28.066 13:55:16 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:28.066 13:55:16 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:12:28.066 13:55:16 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:28.066 13:55:16 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 116009 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 116009 ']' 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 116009 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116009 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116009' 00:12:28.066 killing process with pid 116009 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@969 -- # kill 116009 00:12:28.066 13:55:16 blockdev_general -- common/autotest_common.sh@974 -- # wait 116009 00:12:31.351 13:55:19 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:31.351 13:55:19 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:31.351 13:55:19 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:31.351 13:55:19 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.351 13:55:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:31.351 ************************************ 00:12:31.351 START TEST bdev_hello_world 00:12:31.351 ************************************ 00:12:31.351 13:55:19 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:31.351 [2024-07-25 13:55:19.942812] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:12:31.351 [2024-07-25 13:55:19.943098] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116098 ] 00:12:31.351 [2024-07-25 13:55:20.104672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.351 [2024-07-25 13:55:20.313742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.918 [2024-07-25 13:55:20.701297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:31.918 [2024-07-25 13:55:20.701744] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:31.918 [2024-07-25 13:55:20.709229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:31.918 [2024-07-25 13:55:20.709440] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:31.918 [2024-07-25 13:55:20.717272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:31.918 [2024-07-25 13:55:20.717504] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:31.918 [2024-07-25 13:55:20.717663] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:31.918 [2024-07-25 13:55:20.919651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:31.918 [2024-07-25 13:55:20.919963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.918 [2024-07-25 13:55:20.920129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:31.918 [2024-07-25 13:55:20.920267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.918 [2024-07-25 13:55:20.923005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.918 [2024-07-25 13:55:20.923189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:32.487 [2024-07-25 13:55:21.234928] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:32.487 [2024-07-25 13:55:21.235514] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:32.487 [2024-07-25 13:55:21.235916] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:32.487 [2024-07-25 13:55:21.236331] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:32.487 [2024-07-25 13:55:21.236746] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:32.487 [2024-07-25 13:55:21.237047] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:32.487 [2024-07-25 13:55:21.237396] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
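The write/read round trip above is the stock hello_bdev example run by run_test against the generated bdev.json. Assuming the same build layout as in this log, that single step can be repeated by hand from the repository root:

  # Re-run just the hello_world step against the same configuration
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Malloc0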
00:12:32.487 00:12:32.487 [2024-07-25 13:55:21.237735] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:34.396 00:12:34.396 real 0m3.338s 00:12:34.396 user 0m2.744s 00:12:34.396 sys 0m0.430s 00:12:34.396 13:55:23 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.396 13:55:23 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:34.396 ************************************ 00:12:34.396 END TEST bdev_hello_world 00:12:34.396 ************************************ 00:12:34.396 13:55:23 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:34.396 13:55:23 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.396 13:55:23 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.396 13:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:34.396 ************************************ 00:12:34.396 START TEST bdev_bounds 00:12:34.396 ************************************ 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=116166 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:34.396 Process bdevio pid: 116166 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 116166' 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 116166 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 116166 ']' 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.396 13:55:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:34.396 [2024-07-25 13:55:23.336991] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
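bdev_bounds launches bdevio in wait mode against the same JSON configuration and then triggers the CUnit suites over RPC. The two commands below mirror the trace (paths shortened to the repository root; -s 0 corresponds to the PRE_RESERVED_MEM=0 seen earlier, and the explicit kill stands in for the killprocess call at the end of the test):

  # Start bdevio waiting for an RPC-driven test run, then kick it off
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  bdevio_pid=$!
  ./test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"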
00:12:34.396 [2024-07-25 13:55:23.337189] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116166 ] 00:12:34.653 [2024-07-25 13:55:23.512452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:34.911 [2024-07-25 13:55:23.725232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.911 [2024-07-25 13:55:23.725303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.911 [2024-07-25 13:55:23.725309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.169 [2024-07-25 13:55:24.104772] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:35.169 [2024-07-25 13:55:24.105125] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:35.169 [2024-07-25 13:55:24.112715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:35.169 [2024-07-25 13:55:24.112915] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:35.169 [2024-07-25 13:55:24.120723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:35.169 [2024-07-25 13:55:24.120933] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:35.169 [2024-07-25 13:55:24.121093] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:35.427 [2024-07-25 13:55:24.330026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:35.427 [2024-07-25 13:55:24.330349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.427 [2024-07-25 13:55:24.330517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:35.427 [2024-07-25 13:55:24.330654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.427 [2024-07-25 13:55:24.333631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.427 [2024-07-25 13:55:24.333837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:35.685 13:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.685 13:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:35.685 13:55:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:35.943 I/O targets: 00:12:35.943 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:35.943 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:35.943 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:35.943 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:35.943 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:35.943 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:35.943 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:12:35.943 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:35.943 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:12:35.943 00:12:35.943 00:12:35.943 CUnit - A unit testing framework for C - Version 2.1-3 00:12:35.943 http://cunit.sourceforge.net/ 00:12:35.943 00:12:35.943 00:12:35.943 Suite: bdevio tests on: AIO0 00:12:35.943 Test: blockdev write read block ...passed 00:12:35.943 Test: blockdev write zeroes read block ...passed 00:12:35.943 Test: blockdev write zeroes read no split ...passed 00:12:35.943 Test: blockdev write zeroes read split ...passed 00:12:35.943 Test: blockdev write zeroes read split partial ...passed 00:12:35.943 Test: blockdev reset ...passed 00:12:35.943 Test: blockdev write read 8 blocks ...passed 00:12:35.943 Test: blockdev write read size > 128k ...passed 00:12:35.943 Test: blockdev write read invalid size ...passed 00:12:35.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:35.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:35.943 Test: blockdev write read max offset ...passed 00:12:35.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:35.943 Test: blockdev writev readv 8 blocks ...passed 00:12:35.943 Test: blockdev writev readv 30 x 1block ...passed 00:12:35.943 Test: blockdev writev readv block ...passed 00:12:35.943 Test: blockdev writev readv size > 128k ...passed 00:12:35.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:35.943 Test: blockdev comparev and writev ...passed 00:12:35.943 Test: blockdev nvme passthru rw ...passed 00:12:35.943 Test: blockdev nvme passthru vendor specific ...passed 00:12:35.943 Test: blockdev nvme admin passthru ...passed 00:12:35.943 Test: blockdev copy ...passed 00:12:35.943 Suite: bdevio tests on: raid1 00:12:35.943 Test: blockdev write read block ...passed 00:12:35.943 Test: blockdev write zeroes read block ...passed 00:12:35.943 Test: blockdev write zeroes read no split ...passed 00:12:35.943 Test: blockdev write zeroes read split ...passed 00:12:35.943 Test: blockdev write zeroes read split partial ...passed 00:12:35.943 Test: blockdev reset ...passed 00:12:35.943 Test: blockdev write read 8 blocks ...passed 00:12:35.943 Test: blockdev write read size > 128k ...passed 00:12:35.943 Test: blockdev write read invalid size ...passed 00:12:35.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:35.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:35.943 Test: blockdev write read max offset ...passed 00:12:35.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:35.943 Test: blockdev writev readv 8 blocks ...passed 00:12:35.943 Test: blockdev writev readv 30 x 1block ...passed 00:12:35.943 Test: blockdev writev readv block ...passed 00:12:35.943 Test: blockdev writev readv size > 128k ...passed 00:12:35.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:35.943 Test: blockdev comparev and writev ...passed 00:12:35.943 Test: blockdev nvme passthru rw ...passed 00:12:35.943 Test: blockdev nvme passthru vendor specific ...passed 00:12:35.943 Test: blockdev nvme admin passthru ...passed 00:12:35.943 Test: blockdev copy ...passed 00:12:35.943 Suite: bdevio tests on: concat0 00:12:35.943 Test: blockdev write read block ...passed 00:12:35.943 Test: blockdev write zeroes read block ...passed 00:12:35.943 Test: blockdev write zeroes read no split ...passed 00:12:35.943 Test: blockdev write zeroes read split 
...passed 00:12:36.202 Test: blockdev write zeroes read split partial ...passed 00:12:36.202 Test: blockdev reset ...passed 00:12:36.202 Test: blockdev write read 8 blocks ...passed 00:12:36.202 Test: blockdev write read size > 128k ...passed 00:12:36.202 Test: blockdev write read invalid size ...passed 00:12:36.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.202 Test: blockdev write read max offset ...passed 00:12:36.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.202 Test: blockdev writev readv 8 blocks ...passed 00:12:36.202 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.202 Test: blockdev writev readv block ...passed 00:12:36.202 Test: blockdev writev readv size > 128k ...passed 00:12:36.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.202 Test: blockdev comparev and writev ...passed 00:12:36.202 Test: blockdev nvme passthru rw ...passed 00:12:36.202 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.202 Test: blockdev nvme admin passthru ...passed 00:12:36.202 Test: blockdev copy ...passed 00:12:36.202 Suite: bdevio tests on: raid0 00:12:36.202 Test: blockdev write read block ...passed 00:12:36.202 Test: blockdev write zeroes read block ...passed 00:12:36.202 Test: blockdev write zeroes read no split ...passed 00:12:36.202 Test: blockdev write zeroes read split ...passed 00:12:36.202 Test: blockdev write zeroes read split partial ...passed 00:12:36.202 Test: blockdev reset ...passed 00:12:36.202 Test: blockdev write read 8 blocks ...passed 00:12:36.202 Test: blockdev write read size > 128k ...passed 00:12:36.202 Test: blockdev write read invalid size ...passed 00:12:36.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.202 Test: blockdev write read max offset ...passed 00:12:36.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.202 Test: blockdev writev readv 8 blocks ...passed 00:12:36.202 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.202 Test: blockdev writev readv block ...passed 00:12:36.202 Test: blockdev writev readv size > 128k ...passed 00:12:36.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.202 Test: blockdev comparev and writev ...passed 00:12:36.202 Test: blockdev nvme passthru rw ...passed 00:12:36.202 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.202 Test: blockdev nvme admin passthru ...passed 00:12:36.202 Test: blockdev copy ...passed 00:12:36.202 Suite: bdevio tests on: TestPT 00:12:36.202 Test: blockdev write read block ...passed 00:12:36.202 Test: blockdev write zeroes read block ...passed 00:12:36.202 Test: blockdev write zeroes read no split ...passed 00:12:36.202 Test: blockdev write zeroes read split ...passed 00:12:36.202 Test: blockdev write zeroes read split partial ...passed 00:12:36.202 Test: blockdev reset ...passed 00:12:36.202 Test: blockdev write read 8 blocks ...passed 00:12:36.202 Test: blockdev write read size > 128k ...passed 00:12:36.202 Test: blockdev write read invalid size ...passed 00:12:36.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.202 Test: blockdev write read max offset ...passed 00:12:36.202 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.202 Test: blockdev writev readv 8 blocks ...passed 00:12:36.202 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.202 Test: blockdev writev readv block ...passed 00:12:36.202 Test: blockdev writev readv size > 128k ...passed 00:12:36.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.202 Test: blockdev comparev and writev ...passed 00:12:36.202 Test: blockdev nvme passthru rw ...passed 00:12:36.202 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.202 Test: blockdev nvme admin passthru ...passed 00:12:36.202 Test: blockdev copy ...passed 00:12:36.202 Suite: bdevio tests on: Malloc2p7 00:12:36.202 Test: blockdev write read block ...passed 00:12:36.202 Test: blockdev write zeroes read block ...passed 00:12:36.202 Test: blockdev write zeroes read no split ...passed 00:12:36.202 Test: blockdev write zeroes read split ...passed 00:12:36.202 Test: blockdev write zeroes read split partial ...passed 00:12:36.202 Test: blockdev reset ...passed 00:12:36.202 Test: blockdev write read 8 blocks ...passed 00:12:36.202 Test: blockdev write read size > 128k ...passed 00:12:36.202 Test: blockdev write read invalid size ...passed 00:12:36.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.202 Test: blockdev write read max offset ...passed 00:12:36.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.202 Test: blockdev writev readv 8 blocks ...passed 00:12:36.202 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.202 Test: blockdev writev readv block ...passed 00:12:36.202 Test: blockdev writev readv size > 128k ...passed 00:12:36.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.202 Test: blockdev comparev and writev ...passed 00:12:36.202 Test: blockdev nvme passthru rw ...passed 00:12:36.202 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.202 Test: blockdev nvme admin passthru ...passed 00:12:36.202 Test: blockdev copy ...passed 00:12:36.202 Suite: bdevio tests on: Malloc2p6 00:12:36.202 Test: blockdev write read block ...passed 00:12:36.202 Test: blockdev write zeroes read block ...passed 00:12:36.202 Test: blockdev write zeroes read no split ...passed 00:12:36.202 Test: blockdev write zeroes read split ...passed 00:12:36.202 Test: blockdev write zeroes read split partial ...passed 00:12:36.202 Test: blockdev reset ...passed 00:12:36.202 Test: blockdev write read 8 blocks ...passed 00:12:36.202 Test: blockdev write read size > 128k ...passed 00:12:36.202 Test: blockdev write read invalid size ...passed 00:12:36.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.202 Test: blockdev write read max offset ...passed 00:12:36.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.202 Test: blockdev writev readv 8 blocks ...passed 00:12:36.460 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.460 Test: blockdev writev readv block ...passed 00:12:36.460 Test: blockdev writev readv size > 128k ...passed 00:12:36.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.460 Test: blockdev comparev and writev ...passed 00:12:36.460 Test: blockdev nvme passthru rw ...passed 00:12:36.460 Test: blockdev nvme passthru vendor 
specific ...passed 00:12:36.460 Test: blockdev nvme admin passthru ...passed 00:12:36.460 Test: blockdev copy ...passed 00:12:36.460 Suite: bdevio tests on: Malloc2p5 00:12:36.460 Test: blockdev write read block ...passed 00:12:36.460 Test: blockdev write zeroes read block ...passed 00:12:36.460 Test: blockdev write zeroes read no split ...passed 00:12:36.460 Test: blockdev write zeroes read split ...passed 00:12:36.460 Test: blockdev write zeroes read split partial ...passed 00:12:36.460 Test: blockdev reset ...passed 00:12:36.460 Test: blockdev write read 8 blocks ...passed 00:12:36.460 Test: blockdev write read size > 128k ...passed 00:12:36.460 Test: blockdev write read invalid size ...passed 00:12:36.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.460 Test: blockdev write read max offset ...passed 00:12:36.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.460 Test: blockdev writev readv 8 blocks ...passed 00:12:36.460 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.460 Test: blockdev writev readv block ...passed 00:12:36.460 Test: blockdev writev readv size > 128k ...passed 00:12:36.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.460 Test: blockdev comparev and writev ...passed 00:12:36.460 Test: blockdev nvme passthru rw ...passed 00:12:36.460 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.460 Test: blockdev nvme admin passthru ...passed 00:12:36.460 Test: blockdev copy ...passed 00:12:36.460 Suite: bdevio tests on: Malloc2p4 00:12:36.460 Test: blockdev write read block ...passed 00:12:36.460 Test: blockdev write zeroes read block ...passed 00:12:36.460 Test: blockdev write zeroes read no split ...passed 00:12:36.460 Test: blockdev write zeroes read split ...passed 00:12:36.460 Test: blockdev write zeroes read split partial ...passed 00:12:36.460 Test: blockdev reset ...passed 00:12:36.460 Test: blockdev write read 8 blocks ...passed 00:12:36.460 Test: blockdev write read size > 128k ...passed 00:12:36.460 Test: blockdev write read invalid size ...passed 00:12:36.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.460 Test: blockdev write read max offset ...passed 00:12:36.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.460 Test: blockdev writev readv 8 blocks ...passed 00:12:36.460 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.460 Test: blockdev writev readv block ...passed 00:12:36.460 Test: blockdev writev readv size > 128k ...passed 00:12:36.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.460 Test: blockdev comparev and writev ...passed 00:12:36.460 Test: blockdev nvme passthru rw ...passed 00:12:36.460 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.460 Test: blockdev nvme admin passthru ...passed 00:12:36.460 Test: blockdev copy ...passed 00:12:36.460 Suite: bdevio tests on: Malloc2p3 00:12:36.460 Test: blockdev write read block ...passed 00:12:36.460 Test: blockdev write zeroes read block ...passed 00:12:36.460 Test: blockdev write zeroes read no split ...passed 00:12:36.460 Test: blockdev write zeroes read split ...passed 00:12:36.460 Test: blockdev write zeroes read split partial ...passed 00:12:36.460 Test: blockdev reset ...passed 00:12:36.460 Test: 
blockdev write read 8 blocks ...passed 00:12:36.460 Test: blockdev write read size > 128k ...passed 00:12:36.460 Test: blockdev write read invalid size ...passed 00:12:36.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.460 Test: blockdev write read max offset ...passed 00:12:36.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.460 Test: blockdev writev readv 8 blocks ...passed 00:12:36.460 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.460 Test: blockdev writev readv block ...passed 00:12:36.460 Test: blockdev writev readv size > 128k ...passed 00:12:36.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.460 Test: blockdev comparev and writev ...passed 00:12:36.460 Test: blockdev nvme passthru rw ...passed 00:12:36.461 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.461 Test: blockdev nvme admin passthru ...passed 00:12:36.461 Test: blockdev copy ...passed 00:12:36.461 Suite: bdevio tests on: Malloc2p2 00:12:36.461 Test: blockdev write read block ...passed 00:12:36.461 Test: blockdev write zeroes read block ...passed 00:12:36.461 Test: blockdev write zeroes read no split ...passed 00:12:36.461 Test: blockdev write zeroes read split ...passed 00:12:36.461 Test: blockdev write zeroes read split partial ...passed 00:12:36.461 Test: blockdev reset ...passed 00:12:36.461 Test: blockdev write read 8 blocks ...passed 00:12:36.461 Test: blockdev write read size > 128k ...passed 00:12:36.461 Test: blockdev write read invalid size ...passed 00:12:36.461 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.461 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.461 Test: blockdev write read max offset ...passed 00:12:36.461 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.461 Test: blockdev writev readv 8 blocks ...passed 00:12:36.461 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.461 Test: blockdev writev readv block ...passed 00:12:36.461 Test: blockdev writev readv size > 128k ...passed 00:12:36.461 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.461 Test: blockdev comparev and writev ...passed 00:12:36.461 Test: blockdev nvme passthru rw ...passed 00:12:36.461 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.461 Test: blockdev nvme admin passthru ...passed 00:12:36.461 Test: blockdev copy ...passed 00:12:36.461 Suite: bdevio tests on: Malloc2p1 00:12:36.461 Test: blockdev write read block ...passed 00:12:36.461 Test: blockdev write zeroes read block ...passed 00:12:36.461 Test: blockdev write zeroes read no split ...passed 00:12:36.461 Test: blockdev write zeroes read split ...passed 00:12:36.720 Test: blockdev write zeroes read split partial ...passed 00:12:36.720 Test: blockdev reset ...passed 00:12:36.720 Test: blockdev write read 8 blocks ...passed 00:12:36.720 Test: blockdev write read size > 128k ...passed 00:12:36.720 Test: blockdev write read invalid size ...passed 00:12:36.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.720 Test: blockdev write read max offset ...passed 00:12:36.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.720 Test: blockdev writev readv 8 blocks ...passed 00:12:36.720 
Test: blockdev writev readv 30 x 1block ...passed 00:12:36.720 Test: blockdev writev readv block ...passed 00:12:36.720 Test: blockdev writev readv size > 128k ...passed 00:12:36.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.720 Test: blockdev comparev and writev ...passed 00:12:36.720 Test: blockdev nvme passthru rw ...passed 00:12:36.720 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.720 Test: blockdev nvme admin passthru ...passed 00:12:36.720 Test: blockdev copy ...passed 00:12:36.720 Suite: bdevio tests on: Malloc2p0 00:12:36.720 Test: blockdev write read block ...passed 00:12:36.720 Test: blockdev write zeroes read block ...passed 00:12:36.720 Test: blockdev write zeroes read no split ...passed 00:12:36.720 Test: blockdev write zeroes read split ...passed 00:12:36.720 Test: blockdev write zeroes read split partial ...passed 00:12:36.720 Test: blockdev reset ...passed 00:12:36.720 Test: blockdev write read 8 blocks ...passed 00:12:36.720 Test: blockdev write read size > 128k ...passed 00:12:36.720 Test: blockdev write read invalid size ...passed 00:12:36.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.720 Test: blockdev write read max offset ...passed 00:12:36.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.720 Test: blockdev writev readv 8 blocks ...passed 00:12:36.720 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.720 Test: blockdev writev readv block ...passed 00:12:36.720 Test: blockdev writev readv size > 128k ...passed 00:12:36.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.720 Test: blockdev comparev and writev ...passed 00:12:36.720 Test: blockdev nvme passthru rw ...passed 00:12:36.720 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.720 Test: blockdev nvme admin passthru ...passed 00:12:36.720 Test: blockdev copy ...passed 00:12:36.720 Suite: bdevio tests on: Malloc1p1 00:12:36.720 Test: blockdev write read block ...passed 00:12:36.720 Test: blockdev write zeroes read block ...passed 00:12:36.720 Test: blockdev write zeroes read no split ...passed 00:12:36.720 Test: blockdev write zeroes read split ...passed 00:12:36.720 Test: blockdev write zeroes read split partial ...passed 00:12:36.720 Test: blockdev reset ...passed 00:12:36.720 Test: blockdev write read 8 blocks ...passed 00:12:36.720 Test: blockdev write read size > 128k ...passed 00:12:36.720 Test: blockdev write read invalid size ...passed 00:12:36.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.720 Test: blockdev write read max offset ...passed 00:12:36.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.720 Test: blockdev writev readv 8 blocks ...passed 00:12:36.720 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.720 Test: blockdev writev readv block ...passed 00:12:36.720 Test: blockdev writev readv size > 128k ...passed 00:12:36.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.720 Test: blockdev comparev and writev ...passed 00:12:36.720 Test: blockdev nvme passthru rw ...passed 00:12:36.720 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.720 Test: blockdev nvme admin passthru ...passed 00:12:36.720 Test: blockdev copy ...passed 00:12:36.720 Suite: 
bdevio tests on: Malloc1p0 00:12:36.720 Test: blockdev write read block ...passed 00:12:36.720 Test: blockdev write zeroes read block ...passed 00:12:36.720 Test: blockdev write zeroes read no split ...passed 00:12:36.720 Test: blockdev write zeroes read split ...passed 00:12:36.720 Test: blockdev write zeroes read split partial ...passed 00:12:36.720 Test: blockdev reset ...passed 00:12:36.720 Test: blockdev write read 8 blocks ...passed 00:12:36.720 Test: blockdev write read size > 128k ...passed 00:12:36.720 Test: blockdev write read invalid size ...passed 00:12:36.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.720 Test: blockdev write read max offset ...passed 00:12:36.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.720 Test: blockdev writev readv 8 blocks ...passed 00:12:36.720 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.720 Test: blockdev writev readv block ...passed 00:12:36.720 Test: blockdev writev readv size > 128k ...passed 00:12:36.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.720 Test: blockdev comparev and writev ...passed 00:12:36.720 Test: blockdev nvme passthru rw ...passed 00:12:36.720 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.720 Test: blockdev nvme admin passthru ...passed 00:12:36.720 Test: blockdev copy ...passed 00:12:36.720 Suite: bdevio tests on: Malloc0 00:12:36.720 Test: blockdev write read block ...passed 00:12:36.720 Test: blockdev write zeroes read block ...passed 00:12:36.720 Test: blockdev write zeroes read no split ...passed 00:12:36.720 Test: blockdev write zeroes read split ...passed 00:12:36.720 Test: blockdev write zeroes read split partial ...passed 00:12:36.720 Test: blockdev reset ...passed 00:12:36.720 Test: blockdev write read 8 blocks ...passed 00:12:36.720 Test: blockdev write read size > 128k ...passed 00:12:36.720 Test: blockdev write read invalid size ...passed 00:12:36.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.720 Test: blockdev write read max offset ...passed 00:12:36.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.720 Test: blockdev writev readv 8 blocks ...passed 00:12:36.720 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.720 Test: blockdev writev readv block ...passed 00:12:36.720 Test: blockdev writev readv size > 128k ...passed 00:12:36.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.720 Test: blockdev comparev and writev ...passed 00:12:36.720 Test: blockdev nvme passthru rw ...passed 00:12:36.720 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.720 Test: blockdev nvme admin passthru ...passed 00:12:36.720 Test: blockdev copy ...passed 00:12:36.720 00:12:36.720 Run Summary: Type Total Ran Passed Failed Inactive 00:12:36.720 suites 16 16 n/a 0 0 00:12:36.720 tests 368 368 368 0 0 00:12:36.720 asserts 2224 2224 2224 0 n/a 00:12:36.720 00:12:36.720 Elapsed time = 2.694 seconds 00:12:36.979 0 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 116166 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 116166 ']' 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 116166 
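The bdevio suites above finish clean (16 suites, 368 tests, 2224 asserts, 0 failures in about 2.7 s), and the trace that continues below is the killprocess teardown of the test target (pid 116166): confirm the pid is still alive with kill -0, look up its command name, then signal and reap it. A minimal sketch of that shutdown pattern, reconstructed only from the commands visible in this trace (the function name and exact error handling are illustrative, not the real autotest_common.sh helper):

    # Hedged sketch of the killprocess flow seen in this trace. Assumes Linux
    # (the real helper branches on uname); names and fallbacks are illustrative.
    killprocess_sketch() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid, nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0    # process already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                      # target was launched through sudo
        else
            kill "$pid"
            wait "$pid" 2>/dev/null || true       # reap it if it is our own child
        fi
    }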
00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116166 00:12:36.979 killing process with pid 116166 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116166' 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 116166 00:12:36.979 13:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 116166 00:12:38.882 13:55:27 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:38.882 00:12:38.882 real 0m4.338s 00:12:38.882 user 0m10.970s 00:12:38.882 sys 0m0.553s 00:12:38.882 13:55:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.882 ************************************ 00:12:38.882 13:55:27 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:38.882 END TEST bdev_bounds 00:12:38.882 ************************************ 00:12:38.882 13:55:27 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:38.882 13:55:27 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:38.882 13:55:27 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.882 13:55:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:38.882 ************************************ 00:12:38.882 START TEST bdev_nbd 00:12:38.882 ************************************ 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:38.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:38.882 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=116257 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 116257 /var/tmp/spdk-nbd.sock 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 116257 ']' 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.883 13:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:38.883 [2024-07-25 13:55:27.747169] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
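While the freshly started bdev_svc target prints its EAL and reactor start-up notices (continuing below), the surrounding script has already backgrounded the app on its own RPC socket, installed a cleanup trap, and is waiting on /var/tmp/spdk-nbd.sock before issuing any nbd RPCs. A simplified, hedged sketch of that bring-up (the socket poll is a stand-in for the real waitforlisten helper; variable names and the retry budget are illustrative):

    # Hedged sketch of the nbd target bring-up traced above. Paths mirror the
    # trace; polling for the socket is a simplification of waitforlisten.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    CONF=$SPDK_DIR/test/bdev/bdev.json

    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$CONF" &
    nbd_pid=$!
    trap 'kill "$nbd_pid" 2>/dev/null' SIGINT SIGTERM EXIT   # crude stand-in for the real cleanup trap

    # wait (up to ~10 s) for the app to create its RPC UNIX socket
    for _ in $(seq 1 100); do
        [ -S "$SOCK" ] && break
        sleep 0.1
    done

    # with the target listening, each bdev from bdev.json can be exported through
    # the kernel nbd driver; nbd_start_disk prints the device node it allocated
    nbd_device=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" nbd_start_disk Malloc0)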
00:12:38.883 [2024-07-25 13:55:27.747716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.883 [2024-07-25 13:55:27.922604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.141 [2024-07-25 13:55:28.128431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.707 [2024-07-25 13:55:28.514617] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:39.707 [2024-07-25 13:55:28.514983] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:39.707 [2024-07-25 13:55:28.522549] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:39.707 [2024-07-25 13:55:28.522753] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:39.707 [2024-07-25 13:55:28.530579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:39.707 [2024-07-25 13:55:28.530810] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:39.707 [2024-07-25 13:55:28.530967] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:39.707 [2024-07-25 13:55:28.730556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:39.707 [2024-07-25 13:55:28.730854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.707 [2024-07-25 13:55:28.731054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:39.707 [2024-07-25 13:55:28.731144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.707 [2024-07-25 13:55:28.733773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.707 [2024-07-25 13:55:28.733992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.273 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:40.531 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.531 1+0 records in 00:12:40.531 1+0 records out 00:12:40.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407389 s, 10.1 MB/s 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.532 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:40.790 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:40.790 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:40.790 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:40.790 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 
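The nbd0 sequence above, and the nbd1 sequence that resumes below, both follow the waitfornbd pattern: poll /proc/partitions until the kernel exposes the new nbd device, then prove the mapping works by pulling one 4 KiB block through it with an O_DIRECT dd and checking that the scratch file is non-empty. A hedged reconstruction of that check (the retry count of 20 matches the trace, the sleep interval is assumed, and the function name is illustrative):

    # Hedged sketch of the waitfornbd verification traced here: wait for the
    # device node to appear, then confirm a 4 KiB direct read returns real data.
    waitfornbd_sketch() {
        local nbd_name=$1     # e.g. nbd0
        local scratch=$2      # e.g. /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        local i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$scratch")
                [ "$size" != 0 ] && return 0    # read produced data; device is live
            fi
            sleep 0.1
        done
        return 1
    }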
00:12:40.790 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:40.790 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.791 1+0 records in 00:12:40.791 1+0 records out 00:12:40.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736346 s, 5.6 MB/s 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.791 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:41.050 13:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.050 1+0 records in 00:12:41.050 1+0 records out 00:12:41.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601265 s, 6.8 MB/s 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.050 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.309 1+0 records in 00:12:41.309 1+0 records out 00:12:41.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636258 s, 6.4 MB/s 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.309 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 
-- # local i 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.877 1+0 records in 00:12:41.877 1+0 records out 00:12:41.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479503 s, 8.5 MB/s 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:41.877 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.135 1+0 records in 00:12:42.135 1+0 records out 00:12:42.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466192 s, 8.8 MB/s 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:42.135 13:55:30 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.135 13:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.393 1+0 records in 00:12:42.393 1+0 records out 00:12:42.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443457 s, 9.2 MB/s 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.393 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.650 1+0 records in 00:12:42.650 1+0 records out 00:12:42.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694875 s, 5.9 MB/s 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:42.650 13:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.215 1+0 records in 00:12:43.215 1+0 records out 00:12:43.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691876 s, 5.9 MB/s 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.215 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.473 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.473 1+0 records in 00:12:43.473 1+0 records out 00:12:43.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744227 s, 5.5 MB/s 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.474 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:43.732 13:55:32 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.732 1+0 records in 00:12:43.732 1+0 records out 00:12:43.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772763 s, 5.3 MB/s 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:43.732 13:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.296 1+0 records in 00:12:44.296 1+0 records out 00:12:44.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784335 s, 5.2 MB/s 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.296 13:55:33 
blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.296 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.559 1+0 records in 00:12:44.559 1+0 records out 00:12:44.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659176 s, 6.2 MB/s 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.559 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i 
<= 20 )) 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.817 1+0 records in 00:12:44.817 1+0 records out 00:12:44.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610326 s, 6.7 MB/s 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:44.817 13:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.384 1+0 records in 00:12:45.384 1+0 records out 00:12:45.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722286 s, 5.7 MB/s 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 
0 ']' 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.384 1+0 records in 00:12:45.384 1+0 records out 00:12:45.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138999 s, 2.9 MB/s 00:12:45.384 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.643 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:45.643 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.643 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:45.643 13:55:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:45.643 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:45.643 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:45.643 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:45.901 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd0", 00:12:45.901 "bdev_name": "Malloc0" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd1", 00:12:45.901 "bdev_name": "Malloc1p0" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd2", 00:12:45.901 "bdev_name": "Malloc1p1" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd3", 00:12:45.901 "bdev_name": "Malloc2p0" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd4", 00:12:45.901 "bdev_name": "Malloc2p1" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd5", 00:12:45.901 "bdev_name": "Malloc2p2" 00:12:45.901 }, 00:12:45.901 { 
00:12:45.901 "nbd_device": "/dev/nbd6", 00:12:45.901 "bdev_name": "Malloc2p3" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd7", 00:12:45.901 "bdev_name": "Malloc2p4" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd8", 00:12:45.901 "bdev_name": "Malloc2p5" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd9", 00:12:45.901 "bdev_name": "Malloc2p6" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd10", 00:12:45.901 "bdev_name": "Malloc2p7" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd11", 00:12:45.901 "bdev_name": "TestPT" 00:12:45.901 }, 00:12:45.901 { 00:12:45.901 "nbd_device": "/dev/nbd12", 00:12:45.902 "bdev_name": "raid0" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd13", 00:12:45.902 "bdev_name": "concat0" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd14", 00:12:45.902 "bdev_name": "raid1" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd15", 00:12:45.902 "bdev_name": "AIO0" 00:12:45.902 } 00:12:45.902 ]' 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd0", 00:12:45.902 "bdev_name": "Malloc0" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd1", 00:12:45.902 "bdev_name": "Malloc1p0" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd2", 00:12:45.902 "bdev_name": "Malloc1p1" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd3", 00:12:45.902 "bdev_name": "Malloc2p0" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd4", 00:12:45.902 "bdev_name": "Malloc2p1" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd5", 00:12:45.902 "bdev_name": "Malloc2p2" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd6", 00:12:45.902 "bdev_name": "Malloc2p3" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd7", 00:12:45.902 "bdev_name": "Malloc2p4" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd8", 00:12:45.902 "bdev_name": "Malloc2p5" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd9", 00:12:45.902 "bdev_name": "Malloc2p6" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd10", 00:12:45.902 "bdev_name": "Malloc2p7" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd11", 00:12:45.902 "bdev_name": "TestPT" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd12", 00:12:45.902 "bdev_name": "raid0" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd13", 00:12:45.902 "bdev_name": "concat0" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd14", 00:12:45.902 "bdev_name": "raid1" 00:12:45.902 }, 00:12:45.902 { 00:12:45.902 "nbd_device": "/dev/nbd15", 00:12:45.902 "bdev_name": "AIO0" 00:12:45.902 } 00:12:45.902 ]' 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.902 13:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.160 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.727 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.985 13:55:35 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.985 13:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.269 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:47.526 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:47.526 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:47.526 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:47.526 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.527 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.527 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:47.527 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:47.527 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.527 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.527 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.784 13:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:48.042 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:48.042 
13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:48.042 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:48.043 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.043 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.043 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:48.043 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:48.043 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.043 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.043 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:48.608 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.867 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd9 /proc/partitions 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.125 13:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.383 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.641 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.207 13:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.466 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.724 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.981 13:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:51.239 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:51.239 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:51.239 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.498 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:51.757 /dev/nbd0 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.757 1+0 records in 00:12:51.757 1+0 records out 00:12:51.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329973 s, 12.4 MB/s 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:51.757 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:52.017 /dev/nbd1 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 
-- # (( i <= 20 )) 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.017 1+0 records in 00:12:52.017 1+0 records out 00:12:52.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501654 s, 8.2 MB/s 00:12:52.017 13:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.017 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.017 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.017 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.017 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.017 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.017 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.017 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:52.275 /dev/nbd10 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.275 1+0 records in 00:12:52.275 1+0 records out 00:12:52.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416639 s, 9.8 MB/s 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.275 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 
/dev/nbd11 00:12:52.843 /dev/nbd11 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.843 1+0 records in 00:12:52.843 1+0 records out 00:12:52.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486246 s, 8.4 MB/s 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:52.843 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:53.102 /dev/nbd12 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.102 1+0 records in 00:12:53.102 1+0 records 
out 00:12:53.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422992 s, 9.7 MB/s 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.102 13:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:53.361 /dev/nbd13 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.361 1+0 records in 00:12:53.361 1+0 records out 00:12:53.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430034 s, 9.5 MB/s 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.361 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:53.619 /dev/nbd14 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:53.619 13:55:42 
blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.619 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.619 1+0 records in 00:12:53.619 1+0 records out 00:12:53.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513493 s, 8.0 MB/s 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.620 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:53.880 /dev/nbd15 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.880 1+0 records in 00:12:53.880 1+0 records out 00:12:53.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626521 s, 6.5 MB/s 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.880 13:55:42 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:53.880 13:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:54.142 /dev/nbd2 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.400 1+0 records in 00:12:54.400 1+0 records out 00:12:54.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562914 s, 7.3 MB/s 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.400 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:54.657 /dev/nbd3 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( 
i = 1 )) 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.657 1+0 records in 00:12:54.657 1+0 records out 00:12:54.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634493 s, 6.5 MB/s 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.657 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:54.914 /dev/nbd4 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.914 1+0 records in 00:12:54.914 1+0 records out 00:12:54.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528249 s, 7.8 MB/s 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:54.914 13:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:55.225 /dev/nbd5 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.225 1+0 records in 00:12:55.225 1+0 records out 00:12:55.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636528 s, 6.4 MB/s 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:55.225 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:55.492 /dev/nbd6 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:12:55.492 13:55:44 
blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.492 1+0 records in 00:12:55.492 1+0 records out 00:12:55.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000876344 s, 4.7 MB/s 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:55.492 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:55.751 /dev/nbd7 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.751 1+0 records in 00:12:55.751 1+0 records out 00:12:55.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000929858 s, 4.4 MB/s 00:12:55.751 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.010 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.010 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.010 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.010 13:55:44 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.010 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.010 13:55:44 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.010 13:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:56.010 /dev/nbd8 00:12:56.268 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:56.268 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.269 1+0 records in 00:12:56.269 1+0 records out 00:12:56.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107398 s, 3.8 MB/s 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.269 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:56.528 /dev/nbd9 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.528 
13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.528 1+0 records in 00:12:56.528 1+0 records out 00:12:56.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110638 s, 3.7 MB/s 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.528 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.787 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd0", 00:12:56.787 "bdev_name": "Malloc0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd1", 00:12:56.787 "bdev_name": "Malloc1p0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd10", 00:12:56.787 "bdev_name": "Malloc1p1" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd11", 00:12:56.787 "bdev_name": "Malloc2p0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd12", 00:12:56.787 "bdev_name": "Malloc2p1" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd13", 00:12:56.787 "bdev_name": "Malloc2p2" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd14", 00:12:56.787 "bdev_name": "Malloc2p3" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd15", 00:12:56.787 "bdev_name": "Malloc2p4" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd2", 00:12:56.787 "bdev_name": "Malloc2p5" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd3", 00:12:56.787 "bdev_name": "Malloc2p6" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd4", 00:12:56.787 "bdev_name": "Malloc2p7" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd5", 00:12:56.787 "bdev_name": "TestPT" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd6", 00:12:56.787 "bdev_name": "raid0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd7", 00:12:56.787 "bdev_name": "concat0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd8", 00:12:56.787 "bdev_name": "raid1" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd9", 00:12:56.787 "bdev_name": "AIO0" 00:12:56.787 } 00:12:56.787 ]' 00:12:56.787 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd0", 00:12:56.787 "bdev_name": "Malloc0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd1", 00:12:56.787 
"bdev_name": "Malloc1p0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd10", 00:12:56.787 "bdev_name": "Malloc1p1" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd11", 00:12:56.787 "bdev_name": "Malloc2p0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd12", 00:12:56.787 "bdev_name": "Malloc2p1" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd13", 00:12:56.787 "bdev_name": "Malloc2p2" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd14", 00:12:56.787 "bdev_name": "Malloc2p3" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd15", 00:12:56.787 "bdev_name": "Malloc2p4" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd2", 00:12:56.787 "bdev_name": "Malloc2p5" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd3", 00:12:56.787 "bdev_name": "Malloc2p6" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd4", 00:12:56.787 "bdev_name": "Malloc2p7" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd5", 00:12:56.787 "bdev_name": "TestPT" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd6", 00:12:56.787 "bdev_name": "raid0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd7", 00:12:56.787 "bdev_name": "concat0" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd8", 00:12:56.787 "bdev_name": "raid1" 00:12:56.787 }, 00:12:56.787 { 00:12:56.787 "nbd_device": "/dev/nbd9", 00:12:56.787 "bdev_name": "AIO0" 00:12:56.787 } 00:12:56.787 ]' 00:12:56.787 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:56.787 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:56.787 /dev/nbd1 00:12:56.787 /dev/nbd10 00:12:56.787 /dev/nbd11 00:12:56.787 /dev/nbd12 00:12:56.787 /dev/nbd13 00:12:56.787 /dev/nbd14 00:12:56.787 /dev/nbd15 00:12:56.787 /dev/nbd2 00:12:56.787 /dev/nbd3 00:12:56.787 /dev/nbd4 00:12:56.787 /dev/nbd5 00:12:56.788 /dev/nbd6 00:12:56.788 /dev/nbd7 00:12:56.788 /dev/nbd8 00:12:56.788 /dev/nbd9' 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:56.788 /dev/nbd1 00:12:56.788 /dev/nbd10 00:12:56.788 /dev/nbd11 00:12:56.788 /dev/nbd12 00:12:56.788 /dev/nbd13 00:12:56.788 /dev/nbd14 00:12:56.788 /dev/nbd15 00:12:56.788 /dev/nbd2 00:12:56.788 /dev/nbd3 00:12:56.788 /dev/nbd4 00:12:56.788 /dev/nbd5 00:12:56.788 /dev/nbd6 00:12:56.788 /dev/nbd7 00:12:56.788 /dev/nbd8 00:12:56.788 /dev/nbd9' 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' 
'/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:56.788 256+0 records in 00:12:56.788 256+0 records out 00:12:56.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114597 s, 91.5 MB/s 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.788 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:57.046 256+0 records in 00:12:57.046 256+0 records out 00:12:57.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158946 s, 6.6 MB/s 00:12:57.046 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.046 13:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:57.046 256+0 records in 00:12:57.046 256+0 records out 00:12:57.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147196 s, 7.1 MB/s 00:12:57.046 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.046 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:57.304 256+0 records in 00:12:57.304 256+0 records out 00:12:57.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156961 s, 6.7 MB/s 00:12:57.304 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.304 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:57.563 256+0 records in 00:12:57.563 256+0 records out 00:12:57.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158499 s, 6.6 MB/s 00:12:57.563 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.563 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:57.563 256+0 records in 00:12:57.563 256+0 records out 00:12:57.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149421 s, 7.0 MB/s 00:12:57.563 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.563 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:57.821 256+0 records in 00:12:57.821 256+0 records out 00:12:57.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159428 s, 6.6 MB/s 00:12:57.821 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.821 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 
count=256 oflag=direct 00:12:57.821 256+0 records in 00:12:57.821 256+0 records out 00:12:57.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152507 s, 6.9 MB/s 00:12:57.821 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:57.821 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:58.080 256+0 records in 00:12:58.080 256+0 records out 00:12:58.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15834 s, 6.6 MB/s 00:12:58.080 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:58.080 13:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:58.338 256+0 records in 00:12:58.338 256+0 records out 00:12:58.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147496 s, 7.1 MB/s 00:12:58.338 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:58.338 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:58.338 256+0 records in 00:12:58.338 256+0 records out 00:12:58.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139589 s, 7.5 MB/s 00:12:58.338 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:58.338 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:58.597 256+0 records in 00:12:58.597 256+0 records out 00:12:58.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157634 s, 6.7 MB/s 00:12:58.597 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:58.597 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:58.855 256+0 records in 00:12:58.855 256+0 records out 00:12:58.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176809 s, 5.9 MB/s 00:12:58.855 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:58.855 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:58.855 256+0 records in 00:12:58.855 256+0 records out 00:12:58.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158833 s, 6.6 MB/s 00:12:58.855 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:58.855 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:59.114 256+0 records in 00:12:59.114 256+0 records out 00:12:59.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156744 s, 6.7 MB/s 00:12:59.114 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.114 13:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:59.114 256+0 records in 00:12:59.114 256+0 records out 00:12:59.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164037 s, 6.4 MB/s 00:12:59.114 13:55:48 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:59.114 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:59.373 256+0 records in 00:12:59.373 256+0 records out 00:12:59.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.217004 s, 4.8 MB/s 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.373 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:59.632 13:55:48 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.632 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.891 13:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.149 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.408 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.976 
13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:00.976 13:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.976 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.543 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:02.110 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd15 00:13:02.110 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:02.110 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:02.110 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.110 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.110 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:02.111 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.111 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.111 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.111 13:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.369 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.627 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.886 13:55:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.192 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.451 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.710 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.968 13:55:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.535 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:04.793 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:04.794 13:55:53 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 
/dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:04.794 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.794 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:04.794 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:04.794 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:04.794 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:05.053 malloc_lvol_verify 00:13:05.053 13:55:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:05.312 7c81f54d-97d7-4890-bc16-cebbc45b296d 00:13:05.312 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:05.570 25d7e101-fbc4-473c-bc93-36ae740ff71a 00:13:05.570 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:06.135 /dev/nbd0 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:06.135 mke2fs 1.46.5 (30-Dec-2021) 00:13:06.135 00:13:06.135 Filesystem too small for a journal 00:13:06.135 Discarding device blocks: 0/1024 done 00:13:06.135 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:06.135 00:13:06.135 Allocating group tables: 0/1 done 00:13:06.135 Writing inode tables: 0/1 done 00:13:06.135 Writing superblocks and filesystem accounting information: 0/1 done 00:13:06.135 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.135 13:55:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.393 13:55:55 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 116257 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 116257 ']' 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 116257 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116257 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.393 killing process with pid 116257 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116257' 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 116257 00:13:06.393 13:55:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 116257 00:13:08.377 13:55:57 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:08.377 00:13:08.377 real 0m29.726s 00:13:08.377 user 0m41.349s 00:13:08.377 sys 0m10.638s 00:13:08.377 13:55:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.377 13:55:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:08.377 ************************************ 00:13:08.377 END TEST bdev_nbd 00:13:08.377 ************************************ 00:13:08.377 13:55:57 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:08.377 13:55:57 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:13:08.377 13:55:57 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:13:08.377 13:55:57 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:08.377 13:55:57 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:08.377 13:55:57 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.377 13:55:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:08.635 ************************************ 00:13:08.635 START TEST bdev_fio 00:13:08.635 ************************************ 00:13:08.635 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:08.635 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:08.636 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:08.636 13:55:57 
blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo 
'[job_Malloc2p1]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 
'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.636 13:55:57 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 ************************************ 00:13:08.636 START TEST bdev_fio_rw_verify 00:13:08.636 ************************************ 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # 
break 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:08.636 13:55:57 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:08.895 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:08.895 fio-3.35 00:13:08.895 Starting 16 threads 00:13:21.095 00:13:21.095 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=117474: Thu Jul 25 13:56:09 2024 00:13:21.095 read: IOPS=70.7k, BW=276MiB/s (289MB/s)(2761MiB/10003msec) 00:13:21.095 slat (usec): min=2, max=39939, avg=39.14, stdev=467.82 00:13:21.095 clat (usec): min=8, max=40236, avg=314.45, stdev=1368.65 00:13:21.095 lat (usec): min=26, max=40257, avg=353.58, stdev=1445.99 00:13:21.095 clat percentiles (usec): 00:13:21.095 | 50.000th=[ 186], 99.000th=[ 668], 99.900th=[16450], 99.990th=[32113], 00:13:21.095 | 99.999th=[33424] 00:13:21.095 write: IOPS=112k, BW=437MiB/s (458MB/s)(4322MiB/9890msec); 0 zone resets 00:13:21.095 slat (usec): min=5, max=60541, avg=72.33, stdev=712.71 00:13:21.095 clat (usec): min=9, max=116212, avg=423.25, stdev=1704.45 00:13:21.095 lat (usec): min=41, 
max=116250, avg=495.58, stdev=1846.92 00:13:21.095 clat percentiles (usec): 00:13:21.095 | 50.000th=[ 239], 99.000th=[ 8455], 99.900th=[22414], 99.990th=[35390], 00:13:21.095 | 99.999th=[60556] 00:13:21.095 bw ( KiB/s): min=266800, max=682424, per=97.85%, avg=437849.26, stdev=7351.65, samples=304 00:13:21.095 iops : min=66700, max=170606, avg=109462.11, stdev=1837.92, samples=304 00:13:21.095 lat (usec) : 10=0.01%, 20=0.01%, 50=0.45%, 100=9.20%, 250=52.73% 00:13:21.095 lat (usec) : 500=34.61%, 750=1.56%, 1000=0.17% 00:13:21.095 lat (msec) : 2=0.14%, 4=0.08%, 10=0.21%, 20=0.73%, 50=0.12% 00:13:21.095 lat (msec) : 100=0.01%, 250=0.01% 00:13:21.095 cpu : usr=56.00%, sys=2.02%, ctx=227757, majf=3, minf=76706 00:13:21.095 IO depths : 1=11.2%, 2=23.6%, 4=52.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.095 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.095 issued rwts: total=706942,1106403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.095 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:21.095 00:13:21.095 Run status group 0 (all jobs): 00:13:21.095 READ: bw=276MiB/s (289MB/s), 276MiB/s-276MiB/s (289MB/s-289MB/s), io=2761MiB (2896MB), run=10003-10003msec 00:13:21.095 WRITE: bw=437MiB/s (458MB/s), 437MiB/s-437MiB/s (458MB/s-458MB/s), io=4322MiB (4532MB), run=9890-9890msec 00:13:23.030 ----------------------------------------------------- 00:13:23.030 Suppressions used: 00:13:23.030 count bytes template 00:13:23.030 16 140 /usr/src/fio/parse.c 00:13:23.030 10408 999168 /usr/src/fio/iolog.c 00:13:23.030 1 904 libcrypto.so 00:13:23.030 ----------------------------------------------------- 00:13:23.030 00:13:23.030 00:13:23.030 real 0m14.346s 00:13:23.030 user 1m35.283s 00:13:23.030 sys 0m4.251s 00:13:23.030 13:56:11 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.030 ************************************ 00:13:23.030 END TEST bdev_fio_rw_verify 00:13:23.030 ************************************ 00:13:23.030 13:56:11 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:23.030 13:56:11 
blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:23.030 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:23.032 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "93b690c7-7062-477c-a129-c7917ee07b27"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "93b690c7-7062-477c-a129-c7917ee07b27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d36ececd-d3d9-5f5c-acb7-696e13d4b03a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d36ececd-d3d9-5f5c-acb7-696e13d4b03a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "99bf90de-b871-5804-88d3-75c35b33fd5a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "99bf90de-b871-5804-88d3-75c35b33fd5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d9fec1dc-541a-547a-85e9-75790b35b511"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d9fec1dc-541a-547a-85e9-75790b35b511",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "67225ad3-f461-565f-b4e8-fa5701d4aa8e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "67225ad3-f461-565f-b4e8-fa5701d4aa8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ef4688f2-d685-560d-99b3-5fa29a7cc185"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef4688f2-d685-560d-99b3-5fa29a7cc185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "adad4d29-3ab2-5102-85ff-3d08cc704918"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "adad4d29-3ab2-5102-85ff-3d08cc704918",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "071c4bd7-5bf9-50e0-a88c-4a7b713d5c6a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "071c4bd7-5bf9-50e0-a88c-4a7b713d5c6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "009e9488-d3d6-567f-9d49-31e1957cf8fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "009e9488-d3d6-567f-9d49-31e1957cf8fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8b0fffaa-41e6-5049-8ebc-f0cb01715734"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8b0fffaa-41e6-5049-8ebc-f0cb01715734",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' 
' "5eec75fc-341f-581c-8223-d24bcd6b5797"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5eec75fc-341f-581c-8223-d24bcd6b5797",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "3c5f88ba-5bde-5c21-9692-08512aaac0e9"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3c5f88ba-5bde-5c21-9692-08512aaac0e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b9484430-cff2-4918-be7b-dfcced59da87"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b9484430-cff2-4918-be7b-dfcced59da87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b9484430-cff2-4918-be7b-dfcced59da87",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' 
"base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a48c5e83-1e48-4aed-9222-95e24500ad7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "fc3de3b6-7e8c-495e-a109-ddf687086163",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c845f0c0-4efc-41ab-a71c-b00d87cea294"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c845f0c0-4efc-41ab-a71c-b00d87cea294",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c845f0c0-4efc-41ab-a71c-b00d87cea294",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "ed4e3357-2c86-46fd-b2aa-b23b8ae290f3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "3a8d53ec-8a73-4e17-95b2-4a5d49f5546c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' 
"num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2b3c782e-b7c4-4038-a536-d0eef4a5ba21",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d49393c9-9609-4b56-b4cc-aa268b56f636",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "48c48637-a846-48c0-81e9-cf120a63503d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "48c48637-a846-48c0-81e9-cf120a63503d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:23.032 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:13:23.032 Malloc1p0 00:13:23.032 Malloc1p1 00:13:23.032 Malloc2p0 00:13:23.032 Malloc2p1 00:13:23.032 Malloc2p2 00:13:23.032 Malloc2p3 00:13:23.032 Malloc2p4 00:13:23.032 Malloc2p5 00:13:23.032 Malloc2p6 00:13:23.032 Malloc2p7 00:13:23.032 TestPT 00:13:23.032 raid0 00:13:23.032 concat0 ]] 00:13:23.032 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:23.033 13:56:11 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "93b690c7-7062-477c-a129-c7917ee07b27"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "93b690c7-7062-477c-a129-c7917ee07b27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "d36ececd-d3d9-5f5c-acb7-696e13d4b03a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d36ececd-d3d9-5f5c-acb7-696e13d4b03a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "99bf90de-b871-5804-88d3-75c35b33fd5a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "99bf90de-b871-5804-88d3-75c35b33fd5a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d9fec1dc-541a-547a-85e9-75790b35b511"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d9fec1dc-541a-547a-85e9-75790b35b511",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "67225ad3-f461-565f-b4e8-fa5701d4aa8e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "67225ad3-f461-565f-b4e8-fa5701d4aa8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": 
"Malloc2p2",' ' "aliases": [' ' "ef4688f2-d685-560d-99b3-5fa29a7cc185"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ef4688f2-d685-560d-99b3-5fa29a7cc185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "adad4d29-3ab2-5102-85ff-3d08cc704918"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "adad4d29-3ab2-5102-85ff-3d08cc704918",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "071c4bd7-5bf9-50e0-a88c-4a7b713d5c6a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "071c4bd7-5bf9-50e0-a88c-4a7b713d5c6a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "009e9488-d3d6-567f-9d49-31e1957cf8fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "009e9488-d3d6-567f-9d49-31e1957cf8fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "8b0fffaa-41e6-5049-8ebc-f0cb01715734"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8b0fffaa-41e6-5049-8ebc-f0cb01715734",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "5eec75fc-341f-581c-8223-d24bcd6b5797"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5eec75fc-341f-581c-8223-d24bcd6b5797",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "3c5f88ba-5bde-5c21-9692-08512aaac0e9"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3c5f88ba-5bde-5c21-9692-08512aaac0e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b9484430-cff2-4918-be7b-dfcced59da87"' ' ],' ' "product_name": "Raid 
Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b9484430-cff2-4918-be7b-dfcced59da87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b9484430-cff2-4918-be7b-dfcced59da87",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a48c5e83-1e48-4aed-9222-95e24500ad7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "fc3de3b6-7e8c-495e-a109-ddf687086163",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c845f0c0-4efc-41ab-a71c-b00d87cea294"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c845f0c0-4efc-41ab-a71c-b00d87cea294",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c845f0c0-4efc-41ab-a71c-b00d87cea294",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "ed4e3357-2c86-46fd-b2aa-b23b8ae290f3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "3a8d53ec-8a73-4e17-95b2-4a5d49f5546c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' 
"aliases": [' ' "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d453a7f3-2d07-41ef-bca0-3d99a0c3ace0",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2b3c782e-b7c4-4038-a536-d0eef4a5ba21",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "d49393c9-9609-4b56-b4cc-aa268b56f636",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "48c48637-a846-48c0-81e9-cf120a63503d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "48c48637-a846-48c0-81e9-cf120a63503d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:13:23.033 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' 
"${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.034 13:56:12 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:23.034 ************************************ 00:13:23.034 START TEST bdev_fio_trim 00:13:23.034 ************************************ 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:23.034 13:56:12 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:23.034 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:23.293 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:23.293 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:23.293 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:13:23.293 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:23.293 13:56:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.293 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.293 fio-3.35 00:13:23.293 Starting 14 threads 00:13:35.513 00:13:35.513 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=117695: Thu Jul 25 13:56:23 2024 00:13:35.513 write: IOPS=165k, BW=644MiB/s (676MB/s)(6453MiB/10016msec); 0 zone resets 00:13:35.513 slat (usec): min=2, max=28042, avg=30.12, stdev=359.38 00:13:35.513 clat (usec): min=23, max=35044, 
avg=218.82, stdev=1004.80 00:13:35.513 lat (usec): min=40, max=35066, avg=248.94, stdev=1066.59 00:13:35.513 clat percentiles (usec): 00:13:35.513 | 50.000th=[ 143], 99.000th=[ 494], 99.900th=[16188], 99.990th=[20317], 00:13:35.513 | 99.999th=[28181] 00:13:35.513 bw ( KiB/s): min=485328, max=901088, per=100.00%, avg=660030.24, stdev=10158.76, samples=268 00:13:35.513 iops : min=121332, max=225270, avg=165007.47, stdev=2539.67, samples=268 00:13:35.513 trim: IOPS=165k, BW=644MiB/s (676MB/s)(6453MiB/10016msec); 0 zone resets 00:13:35.513 slat (usec): min=5, max=29912, avg=20.57, stdev=305.19 00:13:35.513 clat (usec): min=4, max=35067, avg=227.43, stdev=1004.40 00:13:35.513 lat (usec): min=14, max=35079, avg=248.00, stdev=1049.65 00:13:35.513 clat percentiles (usec): 00:13:35.513 | 50.000th=[ 159], 99.000th=[ 326], 99.900th=[16188], 99.990th=[20317], 00:13:35.513 | 99.999th=[28181] 00:13:35.513 bw ( KiB/s): min=485328, max=901080, per=100.00%, avg=660030.24, stdev=10158.91, samples=268 00:13:35.513 iops : min=121332, max=225270, avg=165007.47, stdev=2539.72, samples=268 00:13:35.513 lat (usec) : 10=0.14%, 20=0.33%, 50=1.03%, 100=15.29%, 250=77.93% 00:13:35.513 lat (usec) : 500=4.59%, 750=0.22%, 1000=0.01% 00:13:35.513 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.41%, 50=0.01% 00:13:35.513 cpu : usr=68.98%, sys=0.48%, ctx=170321, majf=0, minf=1038 00:13:35.513 IO depths : 1=12.3%, 2=24.6%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.513 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.513 issued rwts: total=0,1652002,1652003,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.513 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.513 00:13:35.513 Run status group 0 (all jobs): 00:13:35.513 WRITE: bw=644MiB/s (676MB/s), 644MiB/s-644MiB/s (676MB/s-676MB/s), io=6453MiB (6767MB), run=10016-10016msec 00:13:35.513 TRIM: bw=644MiB/s (676MB/s), 644MiB/s-644MiB/s (676MB/s-676MB/s), io=6453MiB (6767MB), run=10016-10016msec 00:13:36.888 ----------------------------------------------------- 00:13:36.888 Suppressions used: 00:13:36.888 count bytes template 00:13:36.888 14 129 /usr/src/fio/parse.c 00:13:36.888 1 904 libcrypto.so 00:13:36.888 ----------------------------------------------------- 00:13:36.888 00:13:36.888 00:13:36.888 real 0m13.863s 00:13:36.888 user 1m41.718s 00:13:36.888 sys 0m1.641s 00:13:36.888 13:56:25 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.888 13:56:25 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:13:36.888 ************************************ 00:13:36.888 END TEST bdev_fio_trim 00:13:36.888 ************************************ 00:13:37.146 13:56:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:13:37.146 13:56:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:37.146 /home/vagrant/spdk_repo/spdk 00:13:37.146 13:56:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:13:37.146 13:56:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:13:37.146 00:13:37.146 real 0m28.548s 00:13:37.146 user 3m17.225s 00:13:37.146 sys 0m5.984s 00:13:37.146 13:56:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.146 13:56:25 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 
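For readers following the trace above: the trim pass rebuilds bdev.fio from scratch, appends rw=trimwrite because the workload is trim, keeps only bdevs whose JSON reports supported_io_types.unmap == true (which is why raid1 and AIO0 drop out and exactly 14 jobs remain), and then launches the fio spdk_bdev plugin with ASan preloaded. A minimal bash sketch of that flow, assuming bdevs is a shell array holding the JSON objects printed earlier; the cat'ed global template and the config path are placeholders, not the real autotest helpers.

FIO_CFG=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
BDEV_JSON=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# fio_config_gen: start a fresh config and mark the workload as trim.
touch "$FIO_CFG"
cat fio_template >> "$FIO_CFG"        # placeholder for the cat'ed [global] section
echo rw=trimwrite >> "$FIO_CFG"       # appended because workload == trim

# One [job_<bdev>] section per bdev that supports unmap, as the sh@355-357 echoes show.
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
    echo "[job_$b]"    >> "$FIO_CFG"
    echo "filename=$b" >> "$FIO_CFG"
done

# fio_bdev: preload ASan plus the spdk_bdev fio plugin, then run the generated config.
LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$FIO_CFG" \
  --verify_state_save=0 --spdk_json_conf="$BDEV_JSON" --aux-path=/home/vagrant/spdk_repo/spdk/../output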
00:13:37.146 ************************************ 00:13:37.146 END TEST bdev_fio 00:13:37.146 ************************************ 00:13:37.146 13:56:25 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:37.146 13:56:26 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:37.146 13:56:26 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:37.146 13:56:26 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.146 13:56:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:37.146 ************************************ 00:13:37.146 START TEST bdev_verify 00:13:37.146 ************************************ 00:13:37.146 13:56:26 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:37.146 [2024-07-25 13:56:26.087406] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:13:37.146 [2024-07-25 13:56:26.087670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117884 ] 00:13:37.405 [2024-07-25 13:56:26.258992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.663 [2024-07-25 13:56:26.521747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.663 [2024-07-25 13:56:26.521744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.920 [2024-07-25 13:56:26.908853] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:37.920 [2024-07-25 13:56:26.909188] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:37.920 [2024-07-25 13:56:26.916796] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:37.920 [2024-07-25 13:56:26.916987] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:37.920 [2024-07-25 13:56:26.924831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:37.920 [2024-07-25 13:56:26.925048] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:37.920 [2024-07-25 13:56:26.925194] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:38.179 [2024-07-25 13:56:27.124699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:38.179 [2024-07-25 13:56:27.125261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.179 [2024-07-25 13:56:27.125466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:38.179 [2024-07-25 13:56:27.125644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.179 [2024-07-25 13:56:27.128634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.179 [2024-07-25 13:56:27.128806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:38.746 Running I/O for 5 seconds... 
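The bdev_verify stage drops fio and drives the same bdev.json through the bdevperf example app instead. A sketch of the equivalent standalone invocation follows; the paths and flag values are copied from the command line above, while the per-flag notes are inferred from this log's output rather than quoted from bdevperf's help text.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
BDEV_JSON=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# -q 128    requested queue depth per job (the big-I/O pass later shows it being capped)
# -o 4096   4 KiB I/Os; the follow-up bdev_verify_big_io run switches this to 65536
# -w verify write-then-read-back verification workload
# -t 5      run time in seconds ("Running I/O for 5 seconds...")
# -m 0x3    two reactor cores; the result table lists every bdev once per core mask (0x1 and 0x2)
"$BDEVPERF" --json "$BDEV_JSON" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The -C flag is kept exactly as the log shows it; its effect here is visible only indirectly, in that each bdev appears under both core masks in the latency table that follows.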
00:13:44.009 00:13:44.009 Latency(us) 00:13:44.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.009 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x0 length 0x1000 00:13:44.009 Malloc0 : 5.21 1276.77 4.99 0.00 0.00 100114.59 636.74 318385.80 00:13:44.009 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x1000 length 0x1000 00:13:44.009 Malloc0 : 5.22 1275.94 4.98 0.00 0.00 100168.44 644.19 320292.31 00:13:44.009 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x0 length 0x800 00:13:44.009 Malloc1p0 : 5.22 662.66 2.59 0.00 0.00 192522.18 3291.69 176351.42 00:13:44.009 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x800 length 0x800 00:13:44.009 Malloc1p0 : 5.22 662.22 2.59 0.00 0.00 192604.44 3381.06 177304.67 00:13:44.009 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x0 length 0x800 00:13:44.009 Malloc1p1 : 5.22 662.40 2.59 0.00 0.00 192159.31 3381.06 171585.16 00:13:44.009 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x800 length 0x800 00:13:44.009 Malloc1p1 : 5.22 661.94 2.59 0.00 0.00 192254.22 3500.22 173491.67 00:13:44.009 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x0 length 0x200 00:13:44.009 Malloc2p0 : 5.22 662.12 2.59 0.00 0.00 191788.94 3425.75 167772.16 00:13:44.009 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.009 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p0 : 5.22 661.66 2.58 0.00 0.00 191877.25 3470.43 169678.66 00:13:44.010 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x200 00:13:44.010 Malloc2p1 : 5.22 661.83 2.59 0.00 0.00 191430.38 3336.38 164912.41 00:13:44.010 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p1 : 5.23 661.38 2.58 0.00 0.00 191498.49 3366.17 165865.66 00:13:44.010 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x200 00:13:44.010 Malloc2p2 : 5.22 661.55 2.58 0.00 0.00 191069.24 3157.64 161099.40 00:13:44.010 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p2 : 5.23 661.12 2.58 0.00 0.00 191132.35 3232.12 163005.91 00:13:44.010 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x200 00:13:44.010 Malloc2p3 : 5.23 661.27 2.58 0.00 0.00 190726.50 3038.49 158239.65 00:13:44.010 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p3 : 5.23 660.86 2.58 0.00 0.00 190766.41 3008.70 159192.90 00:13:44.010 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x200 00:13:44.010 Malloc2p4 : 5.23 661.00 2.58 0.00 0.00 190400.76 
2904.44 155379.90 00:13:44.010 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p4 : 5.23 660.60 2.58 0.00 0.00 190430.20 2874.65 156333.15 00:13:44.010 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x200 00:13:44.010 Malloc2p5 : 5.23 660.74 2.58 0.00 0.00 190097.81 2800.17 152520.15 00:13:44.010 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p5 : 5.23 660.34 2.58 0.00 0.00 190131.53 2904.44 154426.65 00:13:44.010 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x200 00:13:44.010 Malloc2p6 : 5.23 660.48 2.58 0.00 0.00 189808.03 2651.23 150613.64 00:13:44.010 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p6 : 5.24 660.10 2.58 0.00 0.00 189820.36 2710.81 151566.89 00:13:44.010 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x200 00:13:44.010 Malloc2p7 : 5.23 660.22 2.58 0.00 0.00 189531.14 2546.97 148707.14 00:13:44.010 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x200 length 0x200 00:13:44.010 Malloc2p7 : 5.24 659.81 2.58 0.00 0.00 189531.20 2412.92 148707.14 00:13:44.010 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x1000 00:13:44.010 TestPT : 5.25 658.66 2.57 0.00 0.00 189566.29 9175.04 148707.14 00:13:44.010 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x1000 length 0x1000 00:13:44.010 TestPT : 5.25 657.98 2.57 0.00 0.00 189640.02 10843.23 150613.64 00:13:44.010 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x2000 00:13:44.010 raid0 : 5.24 659.65 2.58 0.00 0.00 188844.21 2740.60 141081.13 00:13:44.010 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x2000 length 0x2000 00:13:44.010 raid0 : 5.24 659.17 2.57 0.00 0.00 188853.49 2800.17 137268.13 00:13:44.010 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x2000 00:13:44.010 concat0 : 5.24 659.36 2.58 0.00 0.00 188565.87 2725.70 146800.64 00:13:44.010 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x2000 length 0x2000 00:13:44.010 concat0 : 5.25 658.88 2.57 0.00 0.00 188574.77 2889.54 136314.88 00:13:44.010 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x0 length 0x1000 00:13:44.010 raid1 : 5.24 659.08 2.57 0.00 0.00 188257.38 3276.80 152520.15 00:13:44.010 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x1000 length 0x1000 00:13:44.010 raid1 : 5.25 658.59 2.57 0.00 0.00 188260.24 2800.17 136314.88 00:13:44.010 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: 
start 0x0 length 0x4e2 00:13:44.010 AIO0 : 5.25 658.46 2.57 0.00 0.00 187654.98 3410.85 170631.91 00:13:44.010 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:44.010 Verification LBA range: start 0x4e2 length 0x4e2 00:13:44.010 AIO0 : 5.25 657.86 2.57 0.00 0.00 187734.50 2502.28 153473.40 00:13:44.010 =================================================================================================================== 00:13:44.010 Total : 22364.69 87.36 0.00 0.00 179938.83 636.74 320292.31 00:13:46.535 00:13:46.535 real 0m8.954s 00:13:46.535 user 0m15.698s 00:13:46.535 sys 0m0.636s 00:13:46.535 13:56:34 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.535 ************************************ 00:13:46.535 13:56:34 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:46.535 END TEST bdev_verify 00:13:46.535 ************************************ 00:13:46.535 13:56:35 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:46.535 13:56:35 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:46.535 13:56:35 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.535 13:56:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.535 ************************************ 00:13:46.535 START TEST bdev_verify_big_io 00:13:46.535 ************************************ 00:13:46.535 13:56:35 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:46.535 [2024-07-25 13:56:35.081880] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
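The per-bdev rows in the verification summary above all follow the same layout in the console log: a timestamp, the bdev name, a colon, then runtime(s), IOPS, MiB/s, Fail/s, TO/s and the average/min/max latency in microseconds, one numeric row per core-mask job. When two runs need to be compared, those rows can be pulled out of a saved copy of the log with a short filter; the sketch below is only an illustration, assumes the log was saved to a file called bdevperf.log (a made-up name), and relies on nothing beyond the row format visible above.

    # Extract "<bdev> : runtime IOPS MiB/s Fail/s TO/s avg min max" rows from a saved
    # copy of this console log and print name, IOPS and average latency (us).
    # bdevperf.log is an assumed file name; point it at wherever the log was saved.
    grep -E '^[0-9:.]+ +[A-Za-z0-9_]+ +: +[0-9]' bdevperf.log \
      | grep -v ' Total ' \
      | awk '{ printf "%-12s IOPS=%-10s avg_lat_us=%s\n", $2, $5, $9 }'

The same filter works for the summaries further down in this log as well, since the big-I/O, write-zeroes and QoS passes print their results in an identical format.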
00:13:46.535 [2024-07-25 13:56:35.082870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118014 ] 00:13:46.535 [2024-07-25 13:56:35.264747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.535 [2024-07-25 13:56:35.510842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.535 [2024-07-25 13:56:35.510839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.120 [2024-07-25 13:56:35.926388] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.120 [2024-07-25 13:56:35.926723] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.120 [2024-07-25 13:56:35.934342] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.120 [2024-07-25 13:56:35.934546] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.120 [2024-07-25 13:56:35.942405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.120 [2024-07-25 13:56:35.942678] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:47.120 [2024-07-25 13:56:35.942829] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:47.120 [2024-07-25 13:56:36.139315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.120 [2024-07-25 13:56:36.139900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.120 [2024-07-25 13:56:36.140089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:47.120 [2024-07-25 13:56:36.140226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.120 [2024-07-25 13:56:36.143135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.120 [2024-07-25 13:56:36.143319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:47.687 [2024-07-25 13:56:36.502409] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.506162] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.510215] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.514238] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.517745] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.521878] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.525469] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.529517] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.533018] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.537068] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.540549] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.544668] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.548153] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.552224] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.556283] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.559759] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:47.687 [2024-07-25 13:56:36.643769] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:47.687 [2024-07-25 13:56:36.650888] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:47.687 Running I/O for 5 seconds... 00:13:55.798 00:13:55.798 Latency(us) 00:13:55.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.798 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x0 length 0x100 00:13:55.798 Malloc0 : 5.97 192.95 12.06 0.00 0.00 653146.66 744.73 1837867.75 00:13:55.798 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x100 length 0x100 00:13:55.798 Malloc0 : 5.85 196.92 12.31 0.00 0.00 638699.18 826.65 1929379.84 00:13:55.798 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x0 length 0x80 00:13:55.798 Malloc1p0 : 6.10 109.57 6.85 0.00 0.00 1104955.86 2710.81 2211542.11 00:13:55.798 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x80 length 0x80 00:13:55.798 Malloc1p0 : 6.52 41.74 2.61 0.00 0.00 2803601.15 1608.61 4758628.54 00:13:55.798 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x0 length 0x80 00:13:55.798 Malloc1p1 : 6.39 42.56 2.66 0.00 0.00 2725885.47 1623.51 4789132.57 00:13:55.798 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x80 length 0x80 00:13:55.798 Malloc1p1 : 6.52 41.73 2.61 0.00 0.00 2723973.33 1742.66 4606108.39 00:13:55.798 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x0 length 0x20 00:13:55.798 Malloc2p0 : 6.04 29.13 1.82 0.00 0.00 1000635.61 826.65 1601461.53 00:13:55.798 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x20 length 0x20 00:13:55.798 Malloc2p0 : 6.03 29.21 1.83 0.00 0.00 980326.35 722.39 1616713.54 00:13:55.798 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x0 length 0x20 00:13:55.798 Malloc2p1 : 6.04 29.13 1.82 0.00 0.00 993524.18 763.35 1578583.51 00:13:55.798 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x20 length 0x20 00:13:55.798 Malloc2p1 : 6.03 29.20 1.83 0.00 0.00 972770.07 651.64 1593835.52 00:13:55.798 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x0 length 0x20 00:13:55.798 Malloc2p2 : 6.04 29.12 1.82 0.00 0.00 985861.17 655.36 1555705.48 00:13:55.798 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.798 Verification LBA range: start 0x20 length 0x20 00:13:55.798 Malloc2p2 : 6.03 29.19 1.82 0.00 0.00 965137.82 767.07 1570957.50 00:13:55.799 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x20 00:13:55.799 Malloc2p3 : 6.05 29.11 1.82 0.00 0.00 978675.68 793.13 1532827.46 00:13:55.799 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x20 length 0x20 00:13:55.799 Malloc2p3 : 6.09 31.52 1.97 0.00 0.00 897364.24 666.53 1548079.48 00:13:55.799 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x20 00:13:55.799 Malloc2p4 : 6.05 29.11 1.82 0.00 0.00 971657.55 688.87 1509949.44 00:13:55.799 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x20 length 0x20 00:13:55.799 Malloc2p4 : 6.09 31.52 1.97 0.00 0.00 890458.17 673.98 1525201.45 00:13:55.799 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x20 00:13:55.799 Malloc2p5 : 6.05 29.10 1.82 0.00 0.00 965382.80 778.24 1494697.43 00:13:55.799 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x20 length 0x20 00:13:55.799 Malloc2p5 : 6.09 31.51 1.97 0.00 0.00 883577.92 767.07 1502323.43 00:13:55.799 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x20 00:13:55.799 Malloc2p6 : 6.05 29.09 1.82 0.00 0.00 957775.88 696.32 1471819.40 00:13:55.799 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x20 length 0x20 00:13:55.799 Malloc2p6 : 6.10 31.50 1.97 0.00 0.00 876482.55 696.32 1479445.41 00:13:55.799 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x20 00:13:55.799 Malloc2p7 : 6.05 29.09 1.82 0.00 0.00 950912.90 759.62 1456567.39 00:13:55.799 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x20 length 0x20 00:13:55.799 Malloc2p7 : 6.10 31.49 1.97 0.00 0.00 869331.13 767.07 1456567.39 00:13:55.799 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x100 00:13:55.799 TestPT : 6.39 40.67 2.54 0.00 0.00 2614904.73 98184.84 4118043.93 00:13:55.799 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x100 length 0x100 00:13:55.799 TestPT : 6.54 41.58 2.60 0.00 0.00 2522513.93 65297.69 3935019.75 00:13:55.799 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x200 00:13:55.799 raid0 : 6.45 44.66 2.79 0.00 0.00 2315008.56 1757.56 4331572.13 00:13:55.799 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x200 length 0x200 00:13:55.799 raid0 : 6.45 51.31 3.21 0.00 0.00 2011976.14 1705.43 4118043.93 00:13:55.799 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x200 00:13:55.799 concat0 : 6.40 55.03 3.44 0.00 0.00 1860656.25 1675.64 4179051.99 00:13:55.799 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x200 length 0x200 00:13:55.799 concat0 : 6.45 64.77 4.05 0.00 0.00 
1563262.32 1772.45 3965523.78 00:13:55.799 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x100 00:13:55.799 raid1 : 6.40 61.73 3.86 0.00 0.00 1626709.40 2338.44 4057035.87 00:13:55.799 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x100 length 0x100 00:13:55.799 raid1 : 6.54 66.02 4.13 0.00 0.00 1501220.50 2487.39 3828255.65 00:13:55.799 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x0 length 0x4e 00:13:55.799 AIO0 : 6.49 66.60 4.16 0.00 0.00 901779.95 711.21 2394566.28 00:13:55.799 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:55.799 Verification LBA range: start 0x4e length 0x4e 00:13:55.799 AIO0 : 6.57 69.73 4.36 0.00 0.00 846310.90 882.50 2287802.18 00:13:55.799 =================================================================================================================== 00:13:55.799 Total : 1665.60 104.10 0.00 0.00 1286755.31 651.64 4789132.57 00:13:56.752 00:13:56.752 real 0m10.768s 00:13:56.752 user 0m19.758s 00:13:56.753 sys 0m0.512s 00:13:56.753 13:56:45 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.753 ************************************ 00:13:56.753 END TEST bdev_verify_big_io 00:13:56.753 ************************************ 00:13:56.753 13:56:45 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.011 13:56:45 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.011 13:56:45 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:57.011 13:56:45 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.011 13:56:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:57.011 ************************************ 00:13:57.011 START TEST bdev_write_zeroes 00:13:57.011 ************************************ 00:13:57.011 13:56:45 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.011 [2024-07-25 13:56:45.901162] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
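The big-I/O pass above repeats the verify workload with 64 KiB requests (-o 65536) on both cores, and the bdevperf_construct_job warnings near its start record the requested queue depth of 128 being clamped per bdev: 32 outstanding requests on each Malloc2p* split and 78 on AIO0, because a verify job cannot keep more I/Os in flight than the bdev accepts at once. When skimming a saved console log it can help to collapse that wall of warnings into one line per bdev; the snippet below again assumes a saved copy named bdevperf.log.

    # Summarise the queue-depth clamping warnings from the big-I/O verify pass,
    # one "<bdev> -> depth <n>" line per bdev. bdevperf.log is an assumed file name.
    grep 'Queue depth is limited to' bdevperf.log \
      | sed -E 's/.*submitted to the bdev ([A-Za-z0-9_]+) simultaneously \(([0-9]+)\).*/\1 -> depth \2/' \
      | sort -u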
00:13:57.011 [2024-07-25 13:56:45.901373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118166 ] 00:13:57.269 [2024-07-25 13:56:46.068471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.269 [2024-07-25 13:56:46.293105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.835 [2024-07-25 13:56:46.688432] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:57.835 [2024-07-25 13:56:46.688845] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:57.835 [2024-07-25 13:56:46.696376] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:57.835 [2024-07-25 13:56:46.696560] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:57.835 [2024-07-25 13:56:46.704391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:57.835 [2024-07-25 13:56:46.704580] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:57.835 [2024-07-25 13:56:46.704775] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:58.092 [2024-07-25 13:56:46.900099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:58.093 [2024-07-25 13:56:46.900439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.093 [2024-07-25 13:56:46.900592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:58.093 [2024-07-25 13:56:46.900750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.093 [2024-07-25 13:56:46.903504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.093 [2024-07-25 13:56:46.903839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:58.351 Running I/O for 1 seconds... 
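This write-zeroes pass reuses the bdev.json configuration from the verify passes but runs on a single core (the EAL line above shows -c 0x1) for a one-second run. Outside the autotest harness the same job can be launched directly; the sketch below simply restates the command line traced above, with the repository path pulled into a variable (SPDK_REPO is my name for it, the harness hard-codes /home/vagrant/spdk_repo/spdk), so treat it as an illustration rather than the canonical entry point, which remains the run_test call in blockdev.sh.

    # Stand-alone rerun of the write_zeroes job traced above (illustrative only).
    # SPDK_REPO is an assumed variable; the harness uses /home/vagrant/spdk_repo/spdk.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1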
00:13:59.766 00:13:59.766 Latency(us) 00:13:59.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.766 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc0 : 1.05 4748.07 18.55 0.00 0.00 26934.36 718.66 45279.42 00:13:59.766 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc1p0 : 1.05 4741.88 18.52 0.00 0.00 26928.92 983.04 44326.17 00:13:59.766 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc1p1 : 1.05 4735.67 18.50 0.00 0.00 26905.15 1005.38 43611.23 00:13:59.766 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p0 : 1.06 4729.76 18.48 0.00 0.00 26884.04 983.04 42657.98 00:13:59.766 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p1 : 1.06 4723.85 18.45 0.00 0.00 26856.04 1035.17 41704.73 00:13:59.766 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p2 : 1.06 4717.36 18.43 0.00 0.00 26831.82 990.49 40751.48 00:13:59.766 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p3 : 1.06 4711.33 18.40 0.00 0.00 26801.70 968.15 39798.23 00:13:59.766 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p4 : 1.06 4705.26 18.38 0.00 0.00 26782.61 990.49 38844.97 00:13:59.766 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p5 : 1.06 4699.00 18.36 0.00 0.00 26751.97 1050.07 37653.41 00:13:59.766 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p6 : 1.06 4692.92 18.33 0.00 0.00 26729.07 990.49 36700.16 00:13:59.766 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 Malloc2p7 : 1.07 4686.67 18.31 0.00 0.00 26692.81 968.15 35746.91 00:13:59.766 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 TestPT : 1.07 4680.56 18.28 0.00 0.00 26678.42 1027.72 34555.35 00:13:59.766 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 raid0 : 1.07 4673.37 18.26 0.00 0.00 26643.56 1802.24 32887.16 00:13:59.766 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 concat0 : 1.07 4665.84 18.23 0.00 0.00 26582.88 1765.00 30980.65 00:13:59.766 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 raid1 : 1.07 4657.52 18.19 0.00 0.00 26513.15 2785.28 28597.53 00:13:59.766 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:59.766 AIO0 : 1.07 4646.44 18.15 0.00 0.00 26425.82 1586.27 28597.53 00:13:59.767 =================================================================================================================== 00:13:59.767 Total : 75215.50 293.81 0.00 0.00 26746.42 718.66 45279.42 00:14:01.689 00:14:01.689 real 0m4.649s 00:14:01.689 user 0m4.022s 00:14:01.689 sys 0m0.432s 00:14:01.689 13:56:50 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.689 13:56:50 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:01.689 ************************************ 00:14:01.689 END TEST bdev_write_zeroes 00:14:01.689 ************************************ 00:14:01.689 13:56:50 
blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.689 13:56:50 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:01.689 13:56:50 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.689 13:56:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:01.689 ************************************ 00:14:01.689 START TEST bdev_json_nonenclosed 00:14:01.689 ************************************ 00:14:01.689 13:56:50 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.689 [2024-07-25 13:56:50.604692] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:01.689 [2024-07-25 13:56:50.604954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118246 ] 00:14:01.947 [2024-07-25 13:56:50.778609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.204 [2024-07-25 13:56:51.030048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.204 [2024-07-25 13:56:51.030437] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:02.204 [2024-07-25 13:56:51.030640] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:02.204 [2024-07-25 13:56:51.030808] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:02.462 00:14:02.462 real 0m0.890s 00:14:02.462 user 0m0.645s 00:14:02.462 sys 0m0.145s 00:14:02.462 13:56:51 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.462 13:56:51 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:02.462 ************************************ 00:14:02.462 END TEST bdev_json_nonenclosed 00:14:02.462 ************************************ 00:14:02.462 13:56:51 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.462 13:56:51 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:02.462 13:56:51 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.462 13:56:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:02.462 ************************************ 00:14:02.462 START TEST bdev_json_nonarray 00:14:02.462 ************************************ 00:14:02.462 13:56:51 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:02.720 [2024-07-25 13:56:51.551123] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:14:02.720 [2024-07-25 13:56:51.551388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118283 ] 00:14:02.720 [2024-07-25 13:56:51.727446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.978 [2024-07-25 13:56:51.943355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.978 [2024-07-25 13:56:51.943733] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:02.978 [2024-07-25 13:56:51.943923] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:02.978 [2024-07-25 13:56:51.944095] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.544 00:14:03.544 real 0m0.864s 00:14:03.544 user 0m0.610s 00:14:03.544 sys 0m0.152s 00:14:03.544 13:56:52 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.544 13:56:52 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:03.544 ************************************ 00:14:03.544 END TEST bdev_json_nonarray 00:14:03.544 ************************************ 00:14:03.544 13:56:52 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:14:03.544 13:56:52 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:14:03.544 13:56:52 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:03.544 13:56:52 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.544 13:56:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:03.544 ************************************ 00:14:03.544 START TEST bdev_qos 00:14:03.544 ************************************ 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite '' 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=118317 00:14:03.544 Process qos testing pid: 118317 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 118317' 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 118317 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 118317 ']' 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
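Unlike the earlier passes, which hand bdevperf a ready-made bdev.json, the QoS suite starts bdevperf with -z so that it comes up idle, waits for its RPC socket (/var/tmp/spdk.sock, as the message above shows), creates the test bdevs over RPC, and only then starts I/O through the perform_tests helper that appears a little further down. A condensed manual equivalent of that sequence, using the same sizes as the traced rpc_cmd calls (a 128 MiB backing store with 512-byte blocks, matching the num_blocks and block_size shown below), would look roughly like this; rpc_cmd and waitforlisten are the autotest shell helpers seen in the trace, and SPDK_REPO is again an assumed path variable.

    # Rough manual equivalent of the QoS test setup traced below; a sketch, not the
    # canonical procedure in blockdev.sh. Assumes the autotest helpers rpc_cmd and
    # waitforlisten are sourced and SPDK_REPO points at the SPDK checkout.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/examples/bdevperf" -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
    qos_pid=$!
    waitforlisten "$qos_pid"                          # returns once /var/tmp/spdk.sock is up
    rpc_cmd bdev_malloc_create -b Malloc_0 128 512    # 128 MiB malloc bdev, 512-byte blocks
    rpc_cmd bdev_null_create Null_1 128 512           # null bdev with the same geometry
    "$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" perform_tests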
00:14:03.544 13:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.544 13:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:03.544 [2024-07-25 13:56:52.459159] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:03.544 [2024-07-25 13:56:52.459376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118317 ] 00:14:03.802 [2024-07-25 13:56:52.629745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.060 [2024-07-25 13:56:52.886623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:04.625 Malloc_0 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.625 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:04.625 [ 00:14:04.625 { 00:14:04.625 "name": "Malloc_0", 00:14:04.625 "aliases": [ 00:14:04.625 "73c576a7-4b4a-4ce8-8770-67a25a521698" 00:14:04.625 ], 00:14:04.625 "product_name": "Malloc disk", 00:14:04.625 "block_size": 512, 00:14:04.625 "num_blocks": 262144, 00:14:04.625 "uuid": "73c576a7-4b4a-4ce8-8770-67a25a521698", 00:14:04.625 "assigned_rate_limits": { 00:14:04.625 "rw_ios_per_sec": 0, 00:14:04.625 "rw_mbytes_per_sec": 0, 00:14:04.625 "r_mbytes_per_sec": 0, 00:14:04.625 "w_mbytes_per_sec": 0 00:14:04.625 }, 00:14:04.625 "claimed": false, 00:14:04.625 "zoned": false, 00:14:04.626 "supported_io_types": { 00:14:04.626 "read": true, 00:14:04.626 "write": true, 00:14:04.626 "unmap": true, 00:14:04.626 "flush": true, 00:14:04.626 
"reset": true, 00:14:04.626 "nvme_admin": false, 00:14:04.626 "nvme_io": false, 00:14:04.626 "nvme_io_md": false, 00:14:04.626 "write_zeroes": true, 00:14:04.626 "zcopy": true, 00:14:04.626 "get_zone_info": false, 00:14:04.626 "zone_management": false, 00:14:04.626 "zone_append": false, 00:14:04.626 "compare": false, 00:14:04.626 "compare_and_write": false, 00:14:04.626 "abort": true, 00:14:04.626 "seek_hole": false, 00:14:04.626 "seek_data": false, 00:14:04.626 "copy": true, 00:14:04.626 "nvme_iov_md": false 00:14:04.626 }, 00:14:04.626 "memory_domains": [ 00:14:04.626 { 00:14:04.626 "dma_device_id": "system", 00:14:04.626 "dma_device_type": 1 00:14:04.626 }, 00:14:04.626 { 00:14:04.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.626 "dma_device_type": 2 00:14:04.626 } 00:14:04.626 ], 00:14:04.626 "driver_specific": {} 00:14:04.626 } 00:14:04.626 ] 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:04.626 Null_1 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:04.626 [ 00:14:04.626 { 00:14:04.626 "name": "Null_1", 00:14:04.626 "aliases": [ 00:14:04.626 "7c36b088-e4f6-4d65-ac8f-888543a0e5d2" 00:14:04.626 ], 00:14:04.626 "product_name": "Null disk", 00:14:04.626 "block_size": 512, 00:14:04.626 "num_blocks": 262144, 00:14:04.626 "uuid": "7c36b088-e4f6-4d65-ac8f-888543a0e5d2", 00:14:04.626 "assigned_rate_limits": { 00:14:04.626 "rw_ios_per_sec": 0, 00:14:04.626 "rw_mbytes_per_sec": 0, 00:14:04.626 "r_mbytes_per_sec": 0, 00:14:04.626 "w_mbytes_per_sec": 0 00:14:04.626 }, 00:14:04.626 "claimed": false, 00:14:04.626 "zoned": false, 00:14:04.626 "supported_io_types": { 00:14:04.626 "read": true, 00:14:04.626 "write": true, 00:14:04.626 "unmap": false, 00:14:04.626 "flush": 
false, 00:14:04.626 "reset": true, 00:14:04.626 "nvme_admin": false, 00:14:04.626 "nvme_io": false, 00:14:04.626 "nvme_io_md": false, 00:14:04.626 "write_zeroes": true, 00:14:04.626 "zcopy": false, 00:14:04.626 "get_zone_info": false, 00:14:04.626 "zone_management": false, 00:14:04.626 "zone_append": false, 00:14:04.626 "compare": false, 00:14:04.626 "compare_and_write": false, 00:14:04.626 "abort": true, 00:14:04.626 "seek_hole": false, 00:14:04.626 "seek_data": false, 00:14:04.626 "copy": false, 00:14:04.626 "nvme_iov_md": false 00:14:04.626 }, 00:14:04.626 "driver_specific": {} 00:14:04.626 } 00:14:04.626 ] 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:14:04.626 13:56:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:14:04.883 Running I/O for 60 seconds... 
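The pattern that repeats for the rest of the QoS suite is visible in the traces that follow: sample the unthrottled rate of a bdev for a few seconds with iostat.py, derive a limit from that measurement (15000 IOPS for Malloc_0 here, roughly a quarter of the 62651 IOPS measured without a limit; the bandwidth variants read a kilobyte-throughput column of the same iostat output and end up with a 9 MB/s cap on Null_1 and a 2 MB/s read-only cap on Malloc_0), apply it with bdev_set_qos_limit, then re-measure and require the throttled result to land within ±10% of the limit, which is where the 13500/16500, 8294/10137 and 1843/2252 bounds printed below come from. Distilled into a few lines of shell in the same style as the traced awk calls (an illustration of the check, not a copy of run_qos_test itself):

    # Measure, throttle, re-measure, and verify the result sits within +/- 10 % of the
    # limit: a condensed sketch of the IOPS case traced below, not run_qos_test verbatim.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    sample_iops() {
        "$SPDK_REPO/scripts/iostat.py" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print int($2)}'
    }

    io_result=$(sample_iops)                          # column 2 is IOPS; 62651 in this run
    iops_limit=15000                                  # the value the suite derived from io_result here
    rpc_cmd bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0

    qos_result=$(sample_iops)
    lower=$((iops_limit * 9 / 10))                    # 13500
    upper=$((iops_limit * 11 / 10))                   # 16500
    if [ "$qos_result" -ge "$lower" ] && [ "$qos_result" -le "$upper" ]; then
        echo "QoS held: $qos_result IOPS within $lower..$upper"
    else
        echo "QoS check failed: $qos_result IOPS outside $lower..$upper"
    fi

How the suite turns io_result into iops_limit is not spelled out in this excerpt, so the value above is simply the one observed in this run.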
00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 62651.83 250607.33 0.00 0.00 252928.00 0.00 0.00 ' 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=62651.83 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 62651 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=62651 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=15000 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 15000 -gt 1000 ']' 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 15000 Malloc_0 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 15000 IOPS Malloc_0 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.145 13:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:10.145 ************************************ 00:14:10.145 START TEST bdev_qos_iops 00:14:10.145 ************************************ 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 15000 IOPS Malloc_0 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=15000 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:14:10.145 13:56:58 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 15001.90 60007.58 0.00 0.00 61320.00 0.00 0.00 ' 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=15001.90 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@384 -- # echo 15001 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=15001 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=13500 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=16500 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15001 -lt 13500 ']' 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15001 -gt 16500 ']' 00:14:15.412 00:14:15.412 real 0m5.219s 00:14:15.412 user 0m0.114s 00:14:15.412 sys 0m0.035s 00:14:15.412 ************************************ 00:14:15.412 END TEST bdev_qos_iops 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.412 13:57:04 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:14:15.412 ************************************ 00:14:15.413 13:57:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:14:15.413 13:57:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:14:15.413 13:57:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:14:15.413 13:57:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:15.413 13:57:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:15.413 13:57:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:14:15.413 13:57:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 23243.29 92973.18 0.00 0.00 95232.00 0.00 0.00 ' 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=95232.00 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 95232 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=95232 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=9 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 9 -lt 2 ']' 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:20.707 13:57:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:14:20.707 13:57:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:20.707 ************************************ 00:14:20.707 START TEST bdev_qos_bw 00:14:20.707 ************************************ 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 9 BANDWIDTH Null_1 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=9 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:14:20.707 13:57:09 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2303.99 9215.97 0.00 0.00 9504.00 0.00 0.00 ' 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=9504.00 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 9504 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=9504 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=9216 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=8294 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=10137 00:14:25.996 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 9504 -lt 8294 ']' 00:14:25.996 ************************************ 00:14:25.997 END TEST bdev_qos_bw 00:14:25.997 ************************************ 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 9504 -gt 10137 ']' 00:14:25.997 00:14:25.997 real 0m5.272s 00:14:25.997 user 0m0.118s 00:14:25.997 sys 0m0.033s 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:14:25.997 13:57:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:25.997 13:57:14 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.997 13:57:14 
blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:25.997 13:57:14 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.997 13:57:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:25.997 13:57:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:25.997 13:57:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.997 13:57:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:25.997 ************************************ 00:14:25.997 START TEST bdev_qos_ro_bw 00:14:25.997 ************************************ 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:14:25.997 13:57:14 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 511.82 2047.28 0.00 0.00 2064.00 0.00 0.00 ' 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2064.00 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2064 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2064 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -lt 1843 ']' 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -gt 2252 ']' 00:14:31.257 00:14:31.257 real 0m5.195s 00:14:31.257 user 0m0.124s 00:14:31.257 sys 0m0.045s 00:14:31.257 13:57:19 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.257 ************************************ 00:14:31.257 END TEST bdev_qos_ro_bw 00:14:31.257 ************************************ 00:14:31.257 13:57:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:14:31.257 13:57:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:31.257 13:57:19 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.257 13:57:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:31.824 00:14:31.824 Latency(us) 00:14:31.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.824 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:31.824 Malloc_0 : 26.71 20657.96 80.70 0.00 0.00 12278.84 2219.29 503316.48 00:14:31.824 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:31.824 Null_1 : 26.94 22021.68 86.02 0.00 0.00 11595.09 845.27 224967.21 00:14:31.824 =================================================================================================================== 00:14:31.824 Total : 42679.64 166.72 0.00 0.00 11924.60 845.27 503316.48 00:14:31.824 0 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 118317 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 118317 ']' 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 118317 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118317 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:31.824 killing process with pid 118317 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118317' 00:14:31.824 Received shutdown signal, test time was about 26.971639 seconds 00:14:31.824 00:14:31.824 Latency(us) 00:14:31.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.824 =================================================================================================================== 00:14:31.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 118317 00:14:31.824 13:57:20 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 118317 00:14:33.199 13:57:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:14:33.199 00:14:33.199 real 
0m29.756s 00:14:33.199 user 0m30.551s 00:14:33.199 sys 0m0.655s 00:14:33.199 13:57:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.199 ************************************ 00:14:33.199 END TEST bdev_qos 00:14:33.199 ************************************ 00:14:33.199 13:57:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:33.199 13:57:22 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:33.199 13:57:22 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:33.199 13:57:22 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.199 13:57:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:33.199 ************************************ 00:14:33.199 START TEST bdev_qd_sampling 00:14:33.199 ************************************ 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=118794 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:33.200 Process bdev QD sampling period testing pid: 118794 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 118794' 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 118794 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 118794 ']' 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.200 13:57:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:33.458 [2024-07-25 13:57:22.267648] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
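The bdev_qos_ro_bw check above reads the measured rate back with scripts/iostat.py and passes only if it lands within plus or minus 10% of the 2 MiB/s limit that run_qos_test 2 BANDWIDTH Malloc_0 is verifying, which is where the 1843 and 2252 KiB/s bounds in the trace come from. What follows is a condensed sketch of that BANDWIDTH branch, not the literal run_qos_test from the blockdev test script; SPDK_DIR stands in for the /home/vagrant/spdk_repo/spdk checkout, and a Malloc_0 bdev with the limit already applied is assumed.

#!/usr/bin/env bash
# Sketch of the BANDWIDTH tolerance check seen in the bdev_qos_ro_bw trace above.

qos_limit_kb=$((2 * 1024))               # 2 MiB/s limit expressed in KiB/s
lower_limit=$((qos_limit_kb * 9 / 10))   # -10% tolerance -> 1843
upper_limit=$((qos_limit_kb * 11 / 10))  # +10% tolerance -> 2252

# Sample for 5 one-second intervals and keep the last line for Malloc_0; field 6
# is the read KiB/s column that the trace extracts with awk '{print $6}'.
iostat_line=$("$SPDK_DIR"/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
measured_kb=$(echo "$iostat_line" | awk '{print int($6)}')   # int() for the bracket tests; the trace rounds similarly

if [ "$measured_kb" -lt "$lower_limit" ] || [ "$measured_kb" -gt "$upper_limit" ]; then
    echo "FAIL: measured ${measured_kb} KiB/s is outside [${lower_limit}, ${upper_limit}]"
    exit 1
fi
echo "PASS: ${measured_kb} KiB/s is within +/-10% of the ${qos_limit_kb} KiB/s limit"

In the run above the measured 2064 KiB/s sits inside those bounds, so both bracket tests fall through and the test ends in PASS.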
00:14:33.458 [2024-07-25 13:57:22.267975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118794 ] 00:14:33.458 [2024-07-25 13:57:22.437679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.716 [2024-07-25 13:57:22.695946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.716 [2024-07-25 13:57:22.695954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.283 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.283 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:14:34.283 13:57:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:34.283 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.283 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 Malloc_QD 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:34.557 [ 00:14:34.557 { 00:14:34.557 "name": "Malloc_QD", 00:14:34.557 "aliases": [ 00:14:34.557 "7cadb370-1d5d-44d3-a956-3c840d4d679a" 00:14:34.557 ], 00:14:34.557 "product_name": "Malloc disk", 00:14:34.557 "block_size": 512, 00:14:34.557 "num_blocks": 262144, 00:14:34.557 "uuid": "7cadb370-1d5d-44d3-a956-3c840d4d679a", 00:14:34.557 "assigned_rate_limits": { 00:14:34.557 "rw_ios_per_sec": 0, 00:14:34.557 "rw_mbytes_per_sec": 0, 00:14:34.557 "r_mbytes_per_sec": 0, 00:14:34.557 "w_mbytes_per_sec": 0 00:14:34.557 }, 00:14:34.557 "claimed": false, 00:14:34.557 "zoned": false, 00:14:34.557 "supported_io_types": { 00:14:34.557 "read": true, 00:14:34.557 "write": true, 00:14:34.557 "unmap": true, 00:14:34.557 "flush": true, 00:14:34.557 "reset": true, 00:14:34.557 "nvme_admin": 
false, 00:14:34.557 "nvme_io": false, 00:14:34.557 "nvme_io_md": false, 00:14:34.557 "write_zeroes": true, 00:14:34.557 "zcopy": true, 00:14:34.557 "get_zone_info": false, 00:14:34.557 "zone_management": false, 00:14:34.557 "zone_append": false, 00:14:34.557 "compare": false, 00:14:34.557 "compare_and_write": false, 00:14:34.557 "abort": true, 00:14:34.557 "seek_hole": false, 00:14:34.557 "seek_data": false, 00:14:34.557 "copy": true, 00:14:34.557 "nvme_iov_md": false 00:14:34.557 }, 00:14:34.557 "memory_domains": [ 00:14:34.557 { 00:14:34.557 "dma_device_id": "system", 00:14:34.557 "dma_device_type": 1 00:14:34.557 }, 00:14:34.557 { 00:14:34.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.557 "dma_device_type": 2 00:14:34.557 } 00:14:34.557 ], 00:14:34.557 "driver_specific": {} 00:14:34.557 } 00:14:34.557 ] 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:14:34.557 13:57:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:34.557 Running I/O for 5 seconds... 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:14:36.463 "tick_rate": 2200000000, 00:14:36.463 "ticks": 1712651778880, 00:14:36.463 "bdevs": [ 00:14:36.463 { 00:14:36.463 "name": "Malloc_QD", 00:14:36.463 "bytes_read": 796955136, 00:14:36.463 "num_read_ops": 194563, 00:14:36.463 "bytes_written": 0, 00:14:36.463 "num_write_ops": 0, 00:14:36.463 "bytes_unmapped": 0, 00:14:36.463 "num_unmap_ops": 0, 00:14:36.463 "bytes_copied": 0, 00:14:36.463 "num_copy_ops": 0, 00:14:36.463 "read_latency_ticks": 2148205775307, 00:14:36.463 "max_read_latency_ticks": 12414609, 00:14:36.463 "min_read_latency_ticks": 385598, 00:14:36.463 "write_latency_ticks": 0, 00:14:36.463 "max_write_latency_ticks": 0, 00:14:36.463 "min_write_latency_ticks": 0, 00:14:36.463 "unmap_latency_ticks": 0, 00:14:36.463 "max_unmap_latency_ticks": 0, 00:14:36.463 
"min_unmap_latency_ticks": 0, 00:14:36.463 "copy_latency_ticks": 0, 00:14:36.463 "max_copy_latency_ticks": 0, 00:14:36.463 "min_copy_latency_ticks": 0, 00:14:36.463 "io_error": {}, 00:14:36.463 "queue_depth_polling_period": 10, 00:14:36.463 "queue_depth": 512, 00:14:36.463 "io_time": 30, 00:14:36.463 "weighted_io_time": 15360 00:14:36.463 } 00:14:36.463 ] 00:14:36.463 }' 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.463 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:36.722 00:14:36.722 Latency(us) 00:14:36.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.722 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:36.722 Malloc_QD : 1.99 51062.26 199.46 0.00 0.00 5000.48 1221.35 6047.19 00:14:36.722 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:36.722 Malloc_QD : 1.99 50886.82 198.78 0.00 0.00 5017.86 919.74 5510.98 00:14:36.722 =================================================================================================================== 00:14:36.722 Total : 101949.08 398.24 0.00 0.00 5009.15 919.74 6047.19 00:14:36.722 0 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 118794 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 118794 ']' 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 118794 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118794 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118794' 00:14:36.722 killing process with pid 118794 00:14:36.722 13:57:25 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 118794 00:14:36.722 Received shutdown signal, test time was about 2.134248 seconds 00:14:36.722 00:14:36.722 Latency(us) 00:14:36.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.722 =================================================================================================================== 00:14:36.722 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.722 13:57:25 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 118794 00:14:38.174 ************************************ 00:14:38.174 END TEST bdev_qd_sampling 00:14:38.174 ************************************ 00:14:38.174 13:57:27 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:14:38.174 00:14:38.174 real 0m4.825s 00:14:38.174 user 0m8.908s 00:14:38.174 sys 0m0.427s 00:14:38.174 13:57:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.174 13:57:27 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:38.174 13:57:27 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:14:38.174 13:57:27 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:38.174 13:57:27 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.174 13:57:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:38.174 ************************************ 00:14:38.174 START TEST bdev_error 00:14:38.174 ************************************ 00:14:38.175 13:57:27 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:14:38.175 13:57:27 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:14:38.175 13:57:27 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:14:38.175 13:57:27 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:14:38.175 13:57:27 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=118896 00:14:38.175 13:57:27 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:38.175 13:57:27 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 118896' 00:14:38.175 Process error testing pid: 118896 00:14:38.175 13:57:27 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 118896 00:14:38.175 13:57:27 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 118896 ']' 00:14:38.175 13:57:27 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.175 13:57:27 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.175 13:57:27 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.175 13:57:27 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.175 13:57:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:38.175 [2024-07-25 13:57:27.161580] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
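The bdev_qd_sampling suite that finished above needs only two RPCs once bdevperf is up: enable queue-depth sampling on Malloc_QD, then read the stats back and confirm the period round-trips. A minimal sketch of that exchange follows, using scripts/rpc.py directly instead of the harness's rpc_cmd wrapper; the socket path and SPDK_DIR are assumptions, and the period value 10 is taken as-is from the trace.

#!/usr/bin/env bash
# Sketch of the sampling-period round trip from the bdev_qd_sampling trace above.
# Assumes a bdevperf app is listening on /var/tmp/spdk.sock and that Malloc_QD
# already exists (bdev_malloc_create -b Malloc_QD 128 512, as in the trace).

RPC="$SPDK_DIR/scripts/rpc.py"

# Turn on queue-depth polling for Malloc_QD with the period used in the trace.
"$RPC" bdev_set_qd_sampling_period Malloc_QD 10

# bdev_get_iostat reports the configured polling period alongside queue_depth,
# io_time and weighted_io_time, which the trace shows as non-zero once sampling runs.
period=$("$RPC" bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period')

if [ "$period" != "10" ]; then
    echo "FAIL: queue_depth_polling_period is '$period', expected 10"
    exit 1
fi
echo "PASS: queue depth sampling is active with period $period"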
00:14:38.175 [2024-07-25 13:57:27.162018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118896 ] 00:14:38.433 [2024-07-25 13:57:27.338620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.692 [2024-07-25 13:57:27.578230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:14:39.260 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.260 Dev_1 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.260 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.260 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.519 [ 00:14:39.519 { 00:14:39.519 "name": "Dev_1", 00:14:39.519 "aliases": [ 00:14:39.519 "6001899c-d51f-4663-9aaa-2dbf65b793a6" 00:14:39.519 ], 00:14:39.519 "product_name": "Malloc disk", 00:14:39.519 "block_size": 512, 00:14:39.519 "num_blocks": 262144, 00:14:39.519 "uuid": "6001899c-d51f-4663-9aaa-2dbf65b793a6", 00:14:39.519 "assigned_rate_limits": { 00:14:39.519 "rw_ios_per_sec": 0, 00:14:39.519 "rw_mbytes_per_sec": 0, 00:14:39.519 "r_mbytes_per_sec": 0, 00:14:39.519 "w_mbytes_per_sec": 0 00:14:39.519 }, 00:14:39.519 "claimed": false, 00:14:39.519 "zoned": false, 00:14:39.519 "supported_io_types": { 00:14:39.519 "read": true, 00:14:39.519 "write": true, 00:14:39.519 "unmap": true, 00:14:39.519 "flush": true, 00:14:39.519 "reset": true, 00:14:39.519 "nvme_admin": false, 00:14:39.519 "nvme_io": false, 00:14:39.519 "nvme_io_md": false, 00:14:39.519 "write_zeroes": true, 00:14:39.519 "zcopy": true, 00:14:39.519 "get_zone_info": false, 00:14:39.519 "zone_management": false, 00:14:39.519 "zone_append": false, 
00:14:39.519 "compare": false, 00:14:39.519 "compare_and_write": false, 00:14:39.519 "abort": true, 00:14:39.519 "seek_hole": false, 00:14:39.519 "seek_data": false, 00:14:39.519 "copy": true, 00:14:39.519 "nvme_iov_md": false 00:14:39.519 }, 00:14:39.519 "memory_domains": [ 00:14:39.519 { 00:14:39.519 "dma_device_id": "system", 00:14:39.519 "dma_device_type": 1 00:14:39.519 }, 00:14:39.519 { 00:14:39.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.519 "dma_device_type": 2 00:14:39.519 } 00:14:39.519 ], 00:14:39.519 "driver_specific": {} 00:14:39.519 } 00:14:39.519 ] 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:39.519 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.519 true 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.519 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.519 Dev_2 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.519 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.519 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.519 [ 00:14:39.519 { 00:14:39.519 "name": "Dev_2", 00:14:39.519 "aliases": [ 00:14:39.519 "6ec51eca-6f6c-4b37-b143-17edb348cb5e" 00:14:39.519 ], 00:14:39.519 "product_name": "Malloc disk", 00:14:39.519 "block_size": 512, 00:14:39.519 "num_blocks": 262144, 00:14:39.519 "uuid": "6ec51eca-6f6c-4b37-b143-17edb348cb5e", 00:14:39.519 "assigned_rate_limits": { 00:14:39.519 "rw_ios_per_sec": 0, 00:14:39.519 "rw_mbytes_per_sec": 0, 00:14:39.519 "r_mbytes_per_sec": 0, 00:14:39.519 "w_mbytes_per_sec": 0 00:14:39.519 }, 00:14:39.519 "claimed": 
false, 00:14:39.519 "zoned": false, 00:14:39.519 "supported_io_types": { 00:14:39.519 "read": true, 00:14:39.519 "write": true, 00:14:39.519 "unmap": true, 00:14:39.519 "flush": true, 00:14:39.519 "reset": true, 00:14:39.519 "nvme_admin": false, 00:14:39.519 "nvme_io": false, 00:14:39.519 "nvme_io_md": false, 00:14:39.519 "write_zeroes": true, 00:14:39.519 "zcopy": true, 00:14:39.519 "get_zone_info": false, 00:14:39.519 "zone_management": false, 00:14:39.519 "zone_append": false, 00:14:39.519 "compare": false, 00:14:39.519 "compare_and_write": false, 00:14:39.519 "abort": true, 00:14:39.519 "seek_hole": false, 00:14:39.519 "seek_data": false, 00:14:39.519 "copy": true, 00:14:39.519 "nvme_iov_md": false 00:14:39.519 }, 00:14:39.519 "memory_domains": [ 00:14:39.519 { 00:14:39.519 "dma_device_id": "system", 00:14:39.519 "dma_device_type": 1 00:14:39.520 }, 00:14:39.520 { 00:14:39.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.520 "dma_device_type": 2 00:14:39.520 } 00:14:39.520 ], 00:14:39.520 "driver_specific": {} 00:14:39.520 } 00:14:39.520 ] 00:14:39.520 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.520 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:39.520 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:39.520 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.520 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.520 13:57:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.520 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:14:39.520 13:57:28 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:39.778 Running I/O for 5 seconds... 00:14:40.711 Process is existed as continue on error is set. Pid: 118896 00:14:40.711 13:57:29 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 118896 00:14:40.711 13:57:29 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. 
Pid: 118896' 00:14:40.711 13:57:29 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:40.711 13:57:29 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.711 13:57:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.711 13:57:29 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.711 13:57:29 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:40.711 13:57:29 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.711 13:57:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.711 Timeout while waiting for response: 00:14:40.711 00:14:40.711 00:14:40.970 13:57:29 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.970 13:57:29 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:14:45.157 00:14:45.157 Latency(us) 00:14:45.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.157 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:45.157 EE_Dev_1 : 0.91 34557.13 134.99 5.50 0.00 459.40 193.63 860.16 00:14:45.157 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:45.157 Dev_2 : 5.00 70436.21 275.14 0.00 0.00 223.69 58.41 331731.32 00:14:45.157 =================================================================================================================== 00:14:45.157 Total : 104993.34 410.13 5.50 0.00 243.01 58.41 331731.32 00:14:46.093 13:57:34 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 118896 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 118896 ']' 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 118896 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118896 00:14:46.093 killing process with pid 118896 00:14:46.093 Received shutdown signal, test time was about 5.000000 seconds 00:14:46.093 00:14:46.093 Latency(us) 00:14:46.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.093 =================================================================================================================== 00:14:46.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118896' 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 118896 00:14:46.093 13:57:34 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 118896 00:14:47.469 Process error testing pid: 119015 00:14:47.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
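Both bdev_error passes, the first with pid 118896 and the second with pid 119015, use the same recipe: create malloc bdevs, wrap one of them in an error bdev, arm a fixed number of injected failures, drive I/O through bdevperf, and tear down in reverse order. The sketch below reproduces that RPC sequence with the names and arguments from the trace (Dev_1, EE_Dev_1, all failure -n 5); scripts/rpc.py and SPDK_DIR are assumptions, and the workload step itself is left to bdevperf.py perform_tests as in the log.

#!/usr/bin/env bash
# Sketch of the error-injection setup/teardown used by the bdev_error traces.
# Assumes a bdevperf app started with -z is already listening on /var/tmp/spdk.sock.

RPC="$SPDK_DIR/scripts/rpc.py"

# Base bdevs: 128 MiB malloc disks with 512-byte blocks, as created in the trace
# (the trace waits on each with its waitforbdev helper before using it).
"$RPC" bdev_malloc_create -b Dev_1 128 512
"$RPC" bdev_malloc_create -b Dev_2 128 512

# Wrap Dev_1 in an error bdev; the wrapper appears as EE_Dev_1 in the job list.
"$RPC" bdev_error_create Dev_1

# Arm it: the next 5 I/Os of any type complete with a failure status.
"$RPC" bdev_error_inject_error EE_Dev_1 all failure -n 5

# ... run the workload here; the trace drives it with bdevperf.py perform_tests ...

# Teardown mirrors the trace: drop the error wrapper before the base bdev.
"$RPC" bdev_error_delete EE_Dev_1
"$RPC" bdev_malloc_delete Dev_1

The first pass starts bdevperf with -f, so it keeps running after the five injected failures complete with errors, which is where the non-zero Fail/s column for EE_Dev_1 comes from; the second pass omits -f, so perform_tests is expected to fail and the harness only checks that the JSON-RPC error comes back (the NOT wait 119015 step that follows).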
00:14:47.469 13:57:36 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=119015 00:14:47.469 13:57:36 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:47.469 13:57:36 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 119015' 00:14:47.469 13:57:36 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 119015 00:14:47.469 13:57:36 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 119015 ']' 00:14:47.469 13:57:36 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.469 13:57:36 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.469 13:57:36 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.469 13:57:36 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.469 13:57:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:47.469 [2024-07-25 13:57:36.372367] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:47.469 [2024-07-25 13:57:36.372763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119015 ] 00:14:47.727 [2024-07-25 13:57:36.531458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.727 [2024-07-25 13:57:36.748201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:14:48.657 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.657 Dev_1 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.657 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.657 13:57:37 
blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.657 [ 00:14:48.657 { 00:14:48.657 "name": "Dev_1", 00:14:48.657 "aliases": [ 00:14:48.657 "1c97f4a6-7b11-4871-90ba-3f57d2244cb8" 00:14:48.657 ], 00:14:48.657 "product_name": "Malloc disk", 00:14:48.657 "block_size": 512, 00:14:48.657 "num_blocks": 262144, 00:14:48.657 "uuid": "1c97f4a6-7b11-4871-90ba-3f57d2244cb8", 00:14:48.657 "assigned_rate_limits": { 00:14:48.657 "rw_ios_per_sec": 0, 00:14:48.657 "rw_mbytes_per_sec": 0, 00:14:48.657 "r_mbytes_per_sec": 0, 00:14:48.657 "w_mbytes_per_sec": 0 00:14:48.657 }, 00:14:48.657 "claimed": false, 00:14:48.657 "zoned": false, 00:14:48.657 "supported_io_types": { 00:14:48.657 "read": true, 00:14:48.657 "write": true, 00:14:48.657 "unmap": true, 00:14:48.657 "flush": true, 00:14:48.657 "reset": true, 00:14:48.657 "nvme_admin": false, 00:14:48.657 "nvme_io": false, 00:14:48.657 "nvme_io_md": false, 00:14:48.657 "write_zeroes": true, 00:14:48.657 "zcopy": true, 00:14:48.657 "get_zone_info": false, 00:14:48.657 "zone_management": false, 00:14:48.657 "zone_append": false, 00:14:48.657 "compare": false, 00:14:48.657 "compare_and_write": false, 00:14:48.657 "abort": true, 00:14:48.657 "seek_hole": false, 00:14:48.657 "seek_data": false, 00:14:48.657 "copy": true, 00:14:48.657 "nvme_iov_md": false 00:14:48.657 }, 00:14:48.657 "memory_domains": [ 00:14:48.657 { 00:14:48.657 "dma_device_id": "system", 00:14:48.657 "dma_device_type": 1 00:14:48.657 }, 00:14:48.657 { 00:14:48.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.657 "dma_device_type": 2 00:14:48.657 } 00:14:48.657 ], 00:14:48.657 "driver_specific": {} 00:14:48.657 } 00:14:48.657 ] 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:48.657 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.657 true 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.657 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.657 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.915 Dev_2 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.915 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.915 13:57:37 blockdev_general.bdev_error -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.915 [ 00:14:48.915 { 00:14:48.915 "name": "Dev_2", 00:14:48.915 "aliases": [ 00:14:48.915 "5dfc054a-1f7e-4b0d-9da6-b3ff6f59e310" 00:14:48.915 ], 00:14:48.915 "product_name": "Malloc disk", 00:14:48.915 "block_size": 512, 00:14:48.915 "num_blocks": 262144, 00:14:48.915 "uuid": "5dfc054a-1f7e-4b0d-9da6-b3ff6f59e310", 00:14:48.915 "assigned_rate_limits": { 00:14:48.915 "rw_ios_per_sec": 0, 00:14:48.915 "rw_mbytes_per_sec": 0, 00:14:48.915 "r_mbytes_per_sec": 0, 00:14:48.915 "w_mbytes_per_sec": 0 00:14:48.915 }, 00:14:48.915 "claimed": false, 00:14:48.915 "zoned": false, 00:14:48.915 "supported_io_types": { 00:14:48.915 "read": true, 00:14:48.915 "write": true, 00:14:48.915 "unmap": true, 00:14:48.915 "flush": true, 00:14:48.915 "reset": true, 00:14:48.915 "nvme_admin": false, 00:14:48.915 "nvme_io": false, 00:14:48.915 "nvme_io_md": false, 00:14:48.915 "write_zeroes": true, 00:14:48.915 "zcopy": true, 00:14:48.915 "get_zone_info": false, 00:14:48.915 "zone_management": false, 00:14:48.915 "zone_append": false, 00:14:48.915 "compare": false, 00:14:48.915 "compare_and_write": false, 00:14:48.915 "abort": true, 00:14:48.915 "seek_hole": false, 00:14:48.915 "seek_data": false, 00:14:48.915 "copy": true, 00:14:48.915 "nvme_iov_md": false 00:14:48.915 }, 00:14:48.915 "memory_domains": [ 00:14:48.915 { 00:14:48.915 "dma_device_id": "system", 00:14:48.915 "dma_device_type": 1 00:14:48.915 }, 00:14:48.915 { 00:14:48.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.915 "dma_device_type": 2 00:14:48.915 } 00:14:48.915 ], 00:14:48.915 "driver_specific": {} 00:14:48.915 } 00:14:48.915 ] 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:48.915 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.915 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 119015 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:14:48.915 13:57:37 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 119015 00:14:48.915 13:57:37 
blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.915 13:57:37 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 119015 00:14:48.915 Running I/O for 5 seconds... 00:14:48.915 task offset: 83840 on job bdev=EE_Dev_1 fails 00:14:48.915 00:14:48.915 Latency(us) 00:14:48.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.915 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:48.915 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:48.915 EE_Dev_1 : 0.00 8002.91 31.26 1818.84 0.00 1367.06 554.82 2487.39 00:14:48.915 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:48.915 Dev_2 : 0.01 4907.22 19.17 0.00 0.00 2377.54 495.24 4379.00 00:14:48.915 =================================================================================================================== 00:14:48.915 Total : 12910.13 50.43 1818.84 0.00 1915.12 495.24 4379.00 00:14:48.915 [2024-07-25 13:57:37.872828] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:48.915 request: 00:14:48.915 { 00:14:48.915 "method": "perform_tests", 00:14:48.915 "req_id": 1 00:14:48.915 } 00:14:48.915 Got JSON-RPC error response 00:14:48.915 response: 00:14:48.915 { 00:14:48.915 "code": -32603, 00:14:48.915 "message": "bdevperf failed with error Operation not permitted" 00:14:48.915 } 00:14:50.820 ************************************ 00:14:50.820 END TEST bdev_error 00:14:50.820 ************************************ 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.820 00:14:50.820 real 0m12.575s 00:14:50.820 user 0m12.829s 00:14:50.820 sys 0m0.865s 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.820 13:57:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:50.820 13:57:39 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:14:50.820 13:57:39 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:50.820 13:57:39 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.820 13:57:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:50.820 ************************************ 00:14:50.820 START TEST bdev_stat 00:14:50.820 ************************************ 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- 
bdev/blockdev.sh@595 -- # STAT_PID=119080 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 119080' 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:50.820 Process Bdev IO statistics testing pid: 119080 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 119080 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 119080 ']' 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.820 13:57:39 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:50.820 [2024-07-25 13:57:39.786737] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:14:50.820 [2024-07-25 13:57:39.787115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119080 ] 00:14:51.089 [2024-07-25 13:57:39.959902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:51.347 [2024-07-25 13:57:40.226118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.347 [2024-07-25 13:57:40.226135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.913 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.913 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:14:51.913 13:57:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:51.913 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.913 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:52.171 Malloc_STAT 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.171 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:52.171 [ 00:14:52.171 { 00:14:52.171 "name": "Malloc_STAT", 00:14:52.171 "aliases": [ 00:14:52.171 "5d398ea4-a16b-4f66-9954-f6dab5f83b6a" 00:14:52.171 ], 00:14:52.171 "product_name": "Malloc disk", 00:14:52.171 "block_size": 512, 00:14:52.171 "num_blocks": 262144, 00:14:52.171 "uuid": "5d398ea4-a16b-4f66-9954-f6dab5f83b6a", 00:14:52.171 "assigned_rate_limits": { 00:14:52.171 "rw_ios_per_sec": 0, 00:14:52.171 "rw_mbytes_per_sec": 0, 00:14:52.171 "r_mbytes_per_sec": 0, 00:14:52.171 "w_mbytes_per_sec": 0 00:14:52.171 }, 00:14:52.171 "claimed": false, 00:14:52.172 "zoned": false, 00:14:52.172 "supported_io_types": { 00:14:52.172 "read": true, 00:14:52.172 "write": true, 00:14:52.172 "unmap": true, 00:14:52.172 "flush": true, 00:14:52.172 "reset": true, 00:14:52.172 "nvme_admin": false, 00:14:52.172 "nvme_io": false, 00:14:52.172 "nvme_io_md": false, 00:14:52.172 "write_zeroes": true, 00:14:52.172 "zcopy": true, 00:14:52.172 "get_zone_info": false, 00:14:52.172 "zone_management": false, 00:14:52.172 "zone_append": false, 00:14:52.172 "compare": false, 00:14:52.172 "compare_and_write": false, 00:14:52.172 "abort": true, 00:14:52.172 "seek_hole": false, 00:14:52.172 "seek_data": false, 00:14:52.172 "copy": true, 00:14:52.172 "nvme_iov_md": false 00:14:52.172 }, 00:14:52.172 "memory_domains": [ 00:14:52.172 { 00:14:52.172 "dma_device_id": "system", 00:14:52.172 "dma_device_type": 1 00:14:52.172 }, 00:14:52.172 { 00:14:52.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.172 "dma_device_type": 2 00:14:52.172 } 00:14:52.172 ], 00:14:52.172 "driver_specific": {} 00:14:52.172 } 00:14:52.172 ] 00:14:52.172 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.172 13:57:40 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:14:52.172 13:57:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:14:52.172 13:57:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:52.172 Running I/O for 10 seconds... 
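Every suite in this section follows the same readiness pattern after registering its bdev, and the Malloc_STAT setup just above is the latest example: wait for module examination to finish, then look the bdev up by name with a timeout. Below is a condensed sketch of that waitforbdev helper as it appears in the trace; the real helper in the autotest common scripts may do more, and rpc_cmd here stands in for the harness wrapper around scripts/rpc.py.

#!/usr/bin/env bash
# Condensed sketch of the waitforbdev pattern visible in the traces above
# (Malloc_QD, Dev_1/Dev_2 and Malloc_STAT all go through it).

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # defaulted to 2000 when no timeout is given, as in the trace

    # Block until all bdev modules have finished examining newly created bdevs...
    rpc_cmd bdev_wait_for_examine

    # ...then fetch the bdev by name with a timeout; a zero exit status means it
    # showed up and the caller can start issuing I/O against it.
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

# Usage mirroring the trace:
#   rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
#   waitforbdev Malloc_STAT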
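The bdev_stat queries that follow check internal consistency of the counters rather than absolute numbers: an aggregate num_read_ops snapshot, a per-channel snapshot taken with -c, then a second aggregate snapshot, with the per-channel sum required to land between the two totals while I/O is still running. A hedged sketch of that comparison is below, with the jq paths copied from the trace and scripts/rpc.py assumed in place of rpc_cmd.

#!/usr/bin/env bash
# Sketch of the consistency check performed by the bdev_stat trace that follows.
# Assumes a busy bdev named Malloc_STAT on a target at /var/tmp/spdk.sock.

RPC="$SPDK_DIR/scripts/rpc.py"

# Snapshot 1: total read ops for the whole device.
io_count1=$("$RPC" bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# Per-channel snapshot: -c splits the counters per I/O channel
# (two channels here, matching the 0x3 core mask).
per_channel=$("$RPC" bdev_get_iostat -b Malloc_STAT -c)
ch0=$(echo "$per_channel" | jq -r '.channels[0].num_read_ops')
ch1=$(echo "$per_channel" | jq -r '.channels[1].num_read_ops')
channel_sum=$((ch0 + ch1))

# Snapshot 2: totals again, taken after the per-channel query.
io_count2=$("$RPC" bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# With I/O still flowing, the per-channel sum must sit between the two totals.
if [ "$channel_sum" -lt "$io_count1" ] || [ "$channel_sum" -gt "$io_count2" ]; then
    echo "FAIL: per-channel sum $channel_sum not in [$io_count1, $io_count2]"
    exit 1
fi
echo "PASS: $io_count1 <= $channel_sum <= $io_count2"

In the run below the numbers are 201219, 103680 + 104704 = 208384 and 220931, so both bracket tests fall through and the suite passes.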
00:14:54.071 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:14:54.071 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:14:54.072 "tick_rate": 2200000000, 00:14:54.072 "ticks": 1751255454581, 00:14:54.072 "bdevs": [ 00:14:54.072 { 00:14:54.072 "name": "Malloc_STAT", 00:14:54.072 "bytes_read": 824218112, 00:14:54.072 "num_read_ops": 201219, 00:14:54.072 "bytes_written": 0, 00:14:54.072 "num_write_ops": 0, 00:14:54.072 "bytes_unmapped": 0, 00:14:54.072 "num_unmap_ops": 0, 00:14:54.072 "bytes_copied": 0, 00:14:54.072 "num_copy_ops": 0, 00:14:54.072 "read_latency_ticks": 2137341356106, 00:14:54.072 "max_read_latency_ticks": 13995263, 00:14:54.072 "min_read_latency_ticks": 328869, 00:14:54.072 "write_latency_ticks": 0, 00:14:54.072 "max_write_latency_ticks": 0, 00:14:54.072 "min_write_latency_ticks": 0, 00:14:54.072 "unmap_latency_ticks": 0, 00:14:54.072 "max_unmap_latency_ticks": 0, 00:14:54.072 "min_unmap_latency_ticks": 0, 00:14:54.072 "copy_latency_ticks": 0, 00:14:54.072 "max_copy_latency_ticks": 0, 00:14:54.072 "min_copy_latency_ticks": 0, 00:14:54.072 "io_error": {} 00:14:54.072 } 00:14:54.072 ] 00:14:54.072 }' 00:14:54.072 13:57:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:14:54.072 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=201219 00:14:54.072 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:54.072 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.072 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:54.072 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.072 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:14:54.072 "tick_rate": 2200000000, 00:14:54.072 "ticks": 1751406994852, 00:14:54.072 "name": "Malloc_STAT", 00:14:54.072 "channels": [ 00:14:54.072 { 00:14:54.072 "thread_id": 2, 00:14:54.072 "bytes_read": 424673280, 00:14:54.072 "num_read_ops": 103680, 00:14:54.072 "bytes_written": 0, 00:14:54.072 "num_write_ops": 0, 00:14:54.072 "bytes_unmapped": 0, 00:14:54.072 "num_unmap_ops": 0, 
00:14:54.072 "bytes_copied": 0, 00:14:54.072 "num_copy_ops": 0, 00:14:54.072 "read_latency_ticks": 1106148624109, 00:14:54.072 "max_read_latency_ticks": 13995263, 00:14:54.072 "min_read_latency_ticks": 8180279, 00:14:54.072 "write_latency_ticks": 0, 00:14:54.072 "max_write_latency_ticks": 0, 00:14:54.072 "min_write_latency_ticks": 0, 00:14:54.072 "unmap_latency_ticks": 0, 00:14:54.072 "max_unmap_latency_ticks": 0, 00:14:54.072 "min_unmap_latency_ticks": 0, 00:14:54.072 "copy_latency_ticks": 0, 00:14:54.072 "max_copy_latency_ticks": 0, 00:14:54.072 "min_copy_latency_ticks": 0 00:14:54.072 }, 00:14:54.072 { 00:14:54.072 "thread_id": 3, 00:14:54.072 "bytes_read": 428867584, 00:14:54.072 "num_read_ops": 104704, 00:14:54.072 "bytes_written": 0, 00:14:54.072 "num_write_ops": 0, 00:14:54.072 "bytes_unmapped": 0, 00:14:54.072 "num_unmap_ops": 0, 00:14:54.072 "bytes_copied": 0, 00:14:54.072 "num_copy_ops": 0, 00:14:54.072 "read_latency_ticks": 1106778147536, 00:14:54.072 "max_read_latency_ticks": 11822458, 00:14:54.072 "min_read_latency_ticks": 8212267, 00:14:54.072 "write_latency_ticks": 0, 00:14:54.072 "max_write_latency_ticks": 0, 00:14:54.072 "min_write_latency_ticks": 0, 00:14:54.072 "unmap_latency_ticks": 0, 00:14:54.072 "max_unmap_latency_ticks": 0, 00:14:54.072 "min_unmap_latency_ticks": 0, 00:14:54.072 "copy_latency_ticks": 0, 00:14:54.072 "max_copy_latency_ticks": 0, 00:14:54.072 "min_copy_latency_ticks": 0 00:14:54.072 } 00:14:54.072 ] 00:14:54.072 }' 00:14:54.072 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=103680 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=103680 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=104704 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=208384 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:14:54.331 "tick_rate": 2200000000, 00:14:54.331 "ticks": 1751668652245, 00:14:54.331 "bdevs": [ 00:14:54.331 { 00:14:54.331 "name": "Malloc_STAT", 00:14:54.331 "bytes_read": 904958464, 00:14:54.331 "num_read_ops": 220931, 00:14:54.331 "bytes_written": 0, 00:14:54.331 "num_write_ops": 0, 00:14:54.331 "bytes_unmapped": 0, 00:14:54.331 "num_unmap_ops": 0, 00:14:54.331 "bytes_copied": 0, 00:14:54.331 "num_copy_ops": 0, 00:14:54.331 "read_latency_ticks": 2346508517801, 00:14:54.331 "max_read_latency_ticks": 13995263, 00:14:54.331 "min_read_latency_ticks": 328869, 00:14:54.331 "write_latency_ticks": 0, 00:14:54.331 "max_write_latency_ticks": 0, 00:14:54.331 "min_write_latency_ticks": 0, 00:14:54.331 "unmap_latency_ticks": 0, 00:14:54.331 "max_unmap_latency_ticks": 0, 00:14:54.331 "min_unmap_latency_ticks": 0, 00:14:54.331 "copy_latency_ticks": 0, 00:14:54.331 "max_copy_latency_ticks": 0, 00:14:54.331 
"min_copy_latency_ticks": 0, 00:14:54.331 "io_error": {} 00:14:54.331 } 00:14:54.331 ] 00:14:54.331 }' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=220931 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 208384 -lt 201219 ']' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 208384 -gt 220931 ']' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:54.331 00:14:54.331 Latency(us) 00:14:54.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.331 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:54.331 Malloc_STAT : 2.16 52588.14 205.42 0.00 0.00 4856.38 1191.56 6374.87 00:14:54.331 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:54.331 Malloc_STAT : 2.16 53275.15 208.11 0.00 0.00 4793.70 875.05 5391.83 00:14:54.331 =================================================================================================================== 00:14:54.331 Total : 105863.29 413.53 0.00 0.00 4824.83 875.05 6374.87 00:14:54.331 0 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 119080 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 119080 ']' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 119080 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.331 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119080 00:14:54.590 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.590 killing process with pid 119080 00:14:54.590 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.590 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119080' 00:14:54.590 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 119080 00:14:54.590 13:57:43 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 119080 00:14:54.590 Received shutdown signal, test time was about 2.301778 seconds 00:14:54.590 00:14:54.590 Latency(us) 00:14:54.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.590 =================================================================================================================== 00:14:54.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.966 ************************************ 00:14:55.966 END TEST bdev_stat 00:14:55.966 ************************************ 00:14:55.966 13:57:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:14:55.966 00:14:55.966 real 0m4.986s 00:14:55.966 user 0m9.455s 00:14:55.966 sys 0m0.416s 00:14:55.966 
13:57:44 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.966 13:57:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:14:55.966 13:57:44 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:14:55.966 ************************************ 00:14:55.966 END TEST blockdev_general 00:14:55.966 ************************************ 00:14:55.966 00:14:55.966 real 2m30.540s 00:14:55.966 user 6m0.617s 00:14:55.966 sys 0m22.908s 00:14:55.966 13:57:44 blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.966 13:57:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 13:57:44 -- spdk/autotest.sh@194 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:55.966 13:57:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:55.966 13:57:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.966 13:57:44 -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 ************************************ 00:14:55.966 START TEST bdev_raid 00:14:55.966 ************************************ 00:14:55.966 13:57:44 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:55.966 * Looking for test storage... 
00:14:55.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:55.966 13:57:44 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:55.966 13:57:44 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:14:55.966 13:57:44 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:55.966 13:57:44 bdev_raid -- bdev/bdev_raid.sh@1001 -- # mkdir -p /raidtest 00:14:55.966 13:57:44 bdev_raid -- bdev/bdev_raid.sh@1002 -- # trap 'cleanup; exit 1' EXIT 00:14:55.966 13:57:44 bdev_raid -- bdev/bdev_raid.sh@1004 -- # base_blocklen=512 00:14:55.966 13:57:44 bdev_raid -- bdev/bdev_raid.sh@1006 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:14:55.966 13:57:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:55.966 13:57:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.966 13:57:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 ************************************ 00:14:55.966 START TEST raid0_resize_superblock_test 00:14:55.966 ************************************ 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@942 -- # local raid_level=0 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@945 -- # raid_pid=119230 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@946 -- # echo 'Process raid pid: 119230' 00:14:55.966 Process raid pid: 119230 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@944 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@947 -- # waitforlisten 119230 /var/tmp/spdk-raid.sock 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 119230 ']' 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.966 13:57:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.966 [2024-07-25 13:57:44.975730] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
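Each raid test case in this suite runs against a dedicated bdev_svc application with its own RPC socket (/var/tmp/spdk-raid.sock), so raid state never touches the default target. The launch-and-wait step recorded above amounts to starting the app in the background and blocking until the socket answers RPCs; the loop below is an illustrative stand-in for the waitforlisten helper, whose implementation is not part of this log, and uses rpc_get_methods purely as a cheap liveness probe:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll the private socket until the app is ready to serve RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done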
00:14:55.966 [2024-07-25 13:57:44.976115] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.225 [2024-07-25 13:57:45.140267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.484 [2024-07-25 13:57:45.400665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.742 [2024-07-25 13:57:45.613035] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.000 13:57:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.000 13:57:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:57.000 13:57:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@949 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:14:57.934 malloc0 00:14:57.934 13:57:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@951 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:14:58.192 [2024-07-25 13:57:47.201319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:58.193 [2024-07-25 13:57:47.202169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.193 [2024-07-25 13:57:47.202506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:58.193 [2024-07-25 13:57:47.202811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.193 [2024-07-25 13:57:47.205870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.193 [2024-07-25 13:57:47.206180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:14:58.193 pt0 00:14:58.193 13:57:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@952 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:14:58.759 51c16493-99c3-4e85-be6b-ee42dfbe79c0 00:14:58.759 13:57:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@954 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:14:59.017 c27ab735-a0d7-4d06-b938-5840ac17b7b2 00:14:59.017 13:57:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@955 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:14:59.276 188d71d8-5e0a-42a4-b59a-700c24ec5d39 00:14:59.276 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@957 -- # case $raid_level in 00:14:59.276 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@958 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:14:59.536 [2024-07-25 13:57:48.428464] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev c27ab735-a0d7-4d06-b938-5840ac17b7b2 is claimed 00:14:59.536 [2024-07-25 13:57:48.428891] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev 188d71d8-5e0a-42a4-b59a-700c24ec5d39 is claimed 00:14:59.536 [2024-07-25 13:57:48.429209] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:14:59.536 [2024-07-25 13:57:48.429346] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:14:59.536 [2024-07-25 13:57:48.429578] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:59.536 [2024-07-25 13:57:48.430119] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:14:59.536 [2024-07-25 13:57:48.430269] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:14:59.536 [2024-07-25 13:57:48.430581] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.536 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@963 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:14:59.536 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@963 -- # jq '.[].num_blocks' 00:14:59.809 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@963 -- # (( 64 == 64 )) 00:14:59.810 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@964 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:14:59.810 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@964 -- # jq '.[].num_blocks' 00:15:00.070 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@964 -- # (( 64 == 64 )) 00:15:00.070 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:00.070 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@968 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:00.070 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:00.070 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@968 -- # jq '.[].num_blocks' 00:15:00.329 [2024-07-25 13:57:49.188785] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.329 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:00.329 13:57:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:00.329 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@968 -- # (( 245760 == 245760 )) 00:15:00.329 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@973 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:15:00.588 [2024-07-25 13:57:49.416952] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:00.588 [2024-07-25 13:57:49.417216] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c27ab735-a0d7-4d06-b938-5840ac17b7b2' was resized: old size 131072, new size 204800 00:15:00.588 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@974 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:15:00.847 [2024-07-25 13:57:49.688909] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:00.847 [2024-07-25 13:57:49.689112] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '188d71d8-5e0a-42a4-b59a-700c24ec5d39' was resized: old size 131072, new size 204800 00:15:00.847 [2024-07-25 13:57:49.689532] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 
245760 to 393216 00:15:00.847 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@977 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:15:00.847 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@977 -- # jq '.[].num_blocks' 00:15:01.105 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@977 -- # (( 100 == 100 )) 00:15:01.105 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@978 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:15:01.105 13:57:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@978 -- # jq '.[].num_blocks' 00:15:01.363 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@978 -- # (( 100 == 100 )) 00:15:01.363 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:01.363 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@982 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:01.363 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:01.363 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@982 -- # jq '.[].num_blocks' 00:15:01.621 [2024-07-25 13:57:50.461238] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.621 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:01.621 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:01.621 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@982 -- # (( 393216 == 393216 )) 00:15:01.621 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@986 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:15:01.880 [2024-07-25 13:57:50.701049] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:15:01.880 [2024-07-25 13:57:50.701876] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:15:01.880 [2024-07-25 13:57:50.702071] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.880 [2024-07-25 13:57:50.702203] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:15:01.880 [2024-07-25 13:57:50.702428] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.880 [2024-07-25 13:57:50.702580] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.880 [2024-07-25 13:57:50.702682] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:15:01.880 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@987 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:15:02.138 [2024-07-25 13:57:50.941092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:15:02.138 [2024-07-25 13:57:50.941603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.138 [2024-07-25 13:57:50.941954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:15:02.138 [2024-07-25 13:57:50.942296] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.138 [2024-07-25 13:57:50.944958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.138 [2024-07-25 13:57:50.945252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:15:02.138 pt0 00:15:02.138 [2024-07-25 13:57:50.947692] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c27ab735-a0d7-4d06-b938-5840ac17b7b2 00:15:02.139 [2024-07-25 13:57:50.947960] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev c27ab735-a0d7-4d06-b938-5840ac17b7b2 is claimed 00:15:02.139 [2024-07-25 13:57:50.948248] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 188d71d8-5e0a-42a4-b59a-700c24ec5d39 00:15:02.139 [2024-07-25 13:57:50.948410] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev 188d71d8-5e0a-42a4-b59a-700c24ec5d39 is claimed 00:15:02.139 [2024-07-25 13:57:50.948664] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 188d71d8-5e0a-42a4-b59a-700c24ec5d39 (2) smaller than existing raid bdev Raid (3) 00:15:02.139 [2024-07-25 13:57:50.948816] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:15:02.139 [2024-07-25 13:57:50.948932] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:15:02.139 [2024-07-25 13:57:50.949062] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.139 [2024-07-25 13:57:50.949495] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:15:02.139 [2024-07-25 13:57:50.949618] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012d80 00:15:02.139 [2024-07-25 13:57:50.949932] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.139 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:02.139 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@992 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:02.139 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:02.139 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@992 -- # jq '.[].num_blocks' 00:15:02.397 [2024-07-25 13:57:51.230220] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.397 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:02.397 13:57:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@992 -- # (( 393216 == 393216 )) 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@996 -- # killprocess 119230 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 119230 ']' 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 119230 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 119230 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119230' 00:15:02.397 killing process with pid 119230 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 119230 00:15:02.397 [2024-07-25 13:57:51.272990] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.397 13:57:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 119230 00:15:02.397 [2024-07-25 13:57:51.273233] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.397 [2024-07-25 13:57:51.273345] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.397 [2024-07-25 13:57:51.273465] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Raid, state offline 00:15:03.771 [2024-07-25 13:57:52.441298] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.705 13:57:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@998 -- # return 0 00:15:04.705 00:15:04.705 real 0m8.686s 00:15:04.705 user 0m12.863s 00:15:04.705 sys 0m1.071s 00:15:04.705 ************************************ 00:15:04.705 END TEST raid0_resize_superblock_test 00:15:04.705 ************************************ 00:15:04.705 13:57:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.705 13:57:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.705 13:57:53 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:15:04.705 13:57:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:04.705 13:57:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.705 13:57:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.705 ************************************ 00:15:04.705 START TEST raid1_resize_superblock_test 00:15:04.705 ************************************ 00:15:04.705 13:57:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:15:04.705 13:57:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@942 -- # local raid_level=1 00:15:04.705 13:57:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@945 -- # raid_pid=119400 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@944 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:04.706 Process raid pid: 119400 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@946 -- # echo 'Process raid pid: 119400' 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@947 -- # waitforlisten 119400 /var/tmp/spdk-raid.sock 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 119400 ']' 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:04.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.706 13:57:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.706 [2024-07-25 13:57:53.729466] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:04.706 [2024-07-25 13:57:53.729975] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.964 [2024-07-25 13:57:53.901261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.222 [2024-07-25 13:57:54.098530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.479 [2024-07-25 13:57:54.295643] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.738 13:57:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.738 13:57:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:05.738 13:57:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@949 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:15:06.671 malloc0 00:15:06.671 13:57:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@951 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:15:06.671 [2024-07-25 13:57:55.654062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:15:06.671 [2024-07-25 13:57:55.654806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.671 [2024-07-25 13:57:55.655133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:06.671 [2024-07-25 13:57:55.655400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.671 [2024-07-25 13:57:55.658286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.671 [2024-07-25 13:57:55.658622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:15:06.671 pt0 00:15:06.671 13:57:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@952 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:15:07.237 878be960-5842-4d73-b010-79f88d656f98 00:15:07.237 13:57:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@954 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:15:07.237 f5b44fbe-05ca-4f9c-8b7e-9134376fa06a 00:15:07.237 13:57:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@955 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:15:07.494 f0d5b52a-d8f0-4cb3-8402-618c849c6254 00:15:07.494 13:57:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@957 -- # case $raid_level in 
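Before any resize is attempted, the test assembles a layered stack entirely over RPC: a malloc bdev, a passthru bdev on top of it, an lvstore on the passthru, two equally sized logical volumes, and finally the RAID with an on-disk superblock. A condensed sketch of that stack, using the same RPCs that appear in the trace (the size comments are interpretations, not taken verbatim from the log), looks like this:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create -b malloc0 512 512              # backing malloc bdev with 512-byte blocks
    $rpc bdev_passthru_create -b malloc0 -p pt0             # the lvstore sits on the passthru, not on malloc0 directly
    $rpc bdev_lvol_create_lvstore pt0 lvs0
    $rpc bdev_lvol_create -l lvs0 lvol0 64                  # two 64 MiB volumes (131072 blocks each)
    $rpc bdev_lvol_create -l lvs0 lvol1 64
    $rpc bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s   # -s persists the raid superblock

Deleting and later re-creating pt0 is what forces the raid module to re-examine the volumes and reassemble 'Raid' from the superblocks it finds, which is the behaviour the resize assertions below are built around.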
00:15:07.494 13:57:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@959 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:15:07.752 [2024-07-25 13:57:56.727878] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev f5b44fbe-05ca-4f9c-8b7e-9134376fa06a is claimed 00:15:07.752 [2024-07-25 13:57:56.728211] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev f0d5b52a-d8f0-4cb3-8402-618c849c6254 is claimed 00:15:07.752 [2024-07-25 13:57:56.728572] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:07.752 [2024-07-25 13:57:56.728743] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:15:07.752 [2024-07-25 13:57:56.728946] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:07.752 [2024-07-25 13:57:56.729389] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:07.752 [2024-07-25 13:57:56.729540] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:15:07.752 [2024-07-25 13:57:56.729913] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.752 13:57:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@963 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:15:07.752 13:57:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@963 -- # jq '.[].num_blocks' 00:15:08.010 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@963 -- # (( 64 == 64 )) 00:15:08.010 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@964 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:15:08.010 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@964 -- # jq '.[].num_blocks' 00:15:08.267 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@964 -- # (( 64 == 64 )) 00:15:08.267 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:08.267 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@969 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:08.267 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:08.267 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@969 -- # jq '.[].num_blocks' 00:15:08.525 [2024-07-25 13:57:57.484222] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.525 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:08.525 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@967 -- # case $raid_level in 00:15:08.525 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@969 -- # (( 122880 == 122880 )) 00:15:08.525 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@973 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:15:08.782 [2024-07-25 13:57:57.720826] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:08.782 [2024-07-25 13:57:57.721061] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f5b44fbe-05ca-4f9c-8b7e-9134376fa06a' was 
resized: old size 131072, new size 204800 00:15:08.782 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@974 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:15:09.040 [2024-07-25 13:57:57.940791] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:09.040 [2024-07-25 13:57:57.940994] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f0d5b52a-d8f0-4cb3-8402-618c849c6254' was resized: old size 131072, new size 204800 00:15:09.040 [2024-07-25 13:57:57.941327] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:15:09.040 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@977 -- # jq '.[].num_blocks' 00:15:09.040 13:57:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@977 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:15:09.297 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@977 -- # (( 100 == 100 )) 00:15:09.297 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@978 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:15:09.297 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@978 -- # jq '.[].num_blocks' 00:15:09.555 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@978 -- # (( 100 == 100 )) 00:15:09.555 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:09.555 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@983 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:09.555 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:09.555 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@983 -- # jq '.[].num_blocks' 00:15:09.813 [2024-07-25 13:57:58.705046] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.813 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:09.813 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@981 -- # case $raid_level in 00:15:09.813 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@983 -- # (( 196608 == 196608 )) 00:15:09.813 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@986 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:15:10.071 [2024-07-25 13:57:58.936911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:15:10.071 [2024-07-25 13:57:58.937697] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:15:10.071 [2024-07-25 13:57:58.937916] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:15:10.071 [2024-07-25 13:57:58.938200] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.071 [2024-07-25 13:57:58.938629] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.071 [2024-07-25 13:57:58.938867] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.071 [2024-07-25 13:57:58.938994] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:15:10.071 13:57:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@987 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:15:10.328 [2024-07-25 13:57:59.220954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:15:10.328 [2024-07-25 13:57:59.221488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.328 [2024-07-25 13:57:59.221825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:15:10.328 [2024-07-25 13:57:59.222111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.328 [2024-07-25 13:57:59.225057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.328 [2024-07-25 13:57:59.225370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:15:10.328 pt0 00:15:10.328 [2024-07-25 13:57:59.228110] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f5b44fbe-05ca-4f9c-8b7e-9134376fa06a 00:15:10.328 [2024-07-25 13:57:59.228331] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev f5b44fbe-05ca-4f9c-8b7e-9134376fa06a is claimed 00:15:10.328 [2024-07-25 13:57:59.228621] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f0d5b52a-d8f0-4cb3-8402-618c849c6254 00:15:10.328 [2024-07-25 13:57:59.228772] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev f0d5b52a-d8f0-4cb3-8402-618c849c6254 is claimed 00:15:10.328 [2024-07-25 13:57:59.229037] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f0d5b52a-d8f0-4cb3-8402-618c849c6254 (2) smaller than existing raid bdev Raid (3) 00:15:10.328 [2024-07-25 13:57:59.229209] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:15:10.328 [2024-07-25 13:57:59.229325] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:10.328 [2024-07-25 13:57:59.229461] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:10.328 [2024-07-25 13:57:59.229926] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:15:10.328 [2024-07-25 13:57:59.230057] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012d80 00:15:10.328 [2024-07-25 13:57:59.230352] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.328 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:10.328 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@993 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:10.328 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:10.328 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@993 -- # jq '.[].num_blocks' 00:15:10.587 [2024-07-25 13:57:59.458688] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@991 -- # case $raid_level in 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@993 -- # 
(( 196608 == 196608 )) 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@996 -- # killprocess 119400 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 119400 ']' 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 119400 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119400 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119400' 00:15:10.587 killing process with pid 119400 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 119400 00:15:10.587 [2024-07-25 13:57:59.505599] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.587 13:57:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 119400 00:15:10.587 [2024-07-25 13:57:59.505824] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.587 [2024-07-25 13:57:59.506006] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.587 [2024-07-25 13:57:59.506138] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Raid, state offline 00:15:11.961 [2024-07-25 13:58:00.779688] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.335 13:58:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@998 -- # return 0 00:15:13.335 00:15:13.335 real 0m8.304s 00:15:13.335 user 0m12.080s 00:15:13.335 sys 0m1.064s 00:15:13.335 13:58:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.335 ************************************ 00:15:13.335 END TEST raid1_resize_superblock_test 00:15:13.335 ************************************ 00:15:13.335 13:58:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.335 13:58:01 bdev_raid -- bdev/bdev_raid.sh@1009 -- # uname -s 00:15:13.335 13:58:02 bdev_raid -- bdev/bdev_raid.sh@1009 -- # '[' Linux = Linux ']' 00:15:13.335 13:58:02 bdev_raid -- bdev/bdev_raid.sh@1009 -- # modprobe -n nbd 00:15:13.335 13:58:02 bdev_raid -- bdev/bdev_raid.sh@1010 -- # has_nbd=true 00:15:13.335 13:58:02 bdev_raid -- bdev/bdev_raid.sh@1011 -- # modprobe nbd 00:15:13.335 13:58:02 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:15:13.335 13:58:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:13.335 13:58:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:13.335 13:58:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.335 ************************************ 00:15:13.335 START TEST raid_function_test_raid0 00:15:13.335 ************************************ 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@1125 -- # raid_function_test raid0 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=119562 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 119562' 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:13.335 Process raid pid: 119562 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 119562 /var/tmp/spdk-raid.sock 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 119562 ']' 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.335 13:58:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:15:13.335 [2024-07-25 13:58:02.103367] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
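raid_function_test drives the assembled raid bdev through the kernel NBD driver rather than through SPDK-internal I/O: the bdev is exported as /dev/nbd0, filled from a random reference file, and then selectively discarded while the reference file is zeroed over the same ranges, with cmp verifying both states. The commands below condense the sequence that appears in the trace that follows into a single pass (the test itself repeats the discard/compare step for several offset/length pairs):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk raid /dev/nbd0                       # export the raid bdev as an NBD block device
    dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
    dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0       # device must match the reference byte for byte
    # discard a range on the device, zero the same range in the reference, and re-compare
    dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
    blkdiscard -o 0 -l 65536 /dev/nbd0
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0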
00:15:13.335 [2024-07-25 13:58:02.103800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.335 [2024-07-25 13:58:02.272485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.593 [2024-07-25 13:58:02.488198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.852 [2024-07-25 13:58:02.683664] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.110 13:58:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.110 13:58:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:15:14.110 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:15:14.110 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:15:14.110 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:14.110 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:15:14.110 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:14.369 [2024-07-25 13:58:03.401527] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:14.369 [2024-07-25 13:58:03.403770] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:14.369 [2024-07-25 13:58:03.403995] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:14.369 [2024-07-25 13:58:03.404113] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:14.369 [2024-07-25 13:58:03.404278] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:14.369 [2024-07-25 13:58:03.404758] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:14.369 [2024-07-25 13:58:03.404912] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000012a00 00:15:14.369 [2024-07-25 13:58:03.405178] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.369 Base_1 00:15:14.369 Base_2 00:15:14.626 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:14.626 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:14.626 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:15:14.884 13:58:03 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.884 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:15.143 [2024-07-25 13:58:03.957742] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:15.143 /dev/nbd0 00:15:15.143 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:15.143 13:58:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:15.143 13:58:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:15.143 13:58:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:15:15.143 13:58:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:15.143 13:58:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:15.143 13:58:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.143 1+0 records in 00:15:15.143 1+0 records out 00:15:15.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617005 s, 6.6 MB/s 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:15.143 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_get_disks 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:15.402 { 00:15:15.402 "nbd_device": "/dev/nbd0", 00:15:15.402 "bdev_name": "raid" 00:15:15.402 } 00:15:15.402 ]' 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:15.402 { 00:15:15.402 "nbd_device": "/dev/nbd0", 00:15:15.402 "bdev_name": "raid" 00:15:15.402 } 00:15:15.402 ]' 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:15.402 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:15:15.403 4096+0 records in 00:15:15.403 4096+0 records out 00:15:15.403 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0227884 s, 92.0 MB/s 00:15:15.403 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:15.661 4096+0 records in 00:15:15.661 4096+0 records out 00:15:15.661 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.274544 s, 7.6 MB/s 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:15.661 128+0 records in 00:15:15.661 128+0 records out 00:15:15.661 65536 bytes (66 kB, 64 KiB) copied, 0.00104371 s, 62.8 MB/s 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:15:15.661 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:15.947 2035+0 records in 00:15:15.947 2035+0 records out 00:15:15.947 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0087201 s, 119 MB/s 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:15.947 456+0 records in 00:15:15.947 456+0 records out 00:15:15.947 233472 bytes (233 kB, 228 KiB) copied, 0.00231139 s, 101 MB/s 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev 
--flushbufs /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.947 [2024-07-25 13:58:04.984799] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.947 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:16.206 13:58:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:16.206 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:16.206 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:16.206 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 0 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 119562 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 119562 ']' 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 119562 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119562 00:15:16.465 killing process with pid 119562 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119562' 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 119562 00:15:16.465 [2024-07-25 13:58:05.299208] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.465 13:58:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 119562 00:15:16.465 [2024-07-25 13:58:05.299318] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.465 [2024-07-25 13:58:05.299372] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.465 [2024-07-25 13:58:05.299383] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid, state offline 00:15:16.465 [2024-07-25 13:58:05.441780] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.841 13:58:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:15:17.842 ************************************ 00:15:17.842 END TEST raid_function_test_raid0 00:15:17.842 ************************************ 00:15:17.842 00:15:17.842 real 0m4.424s 00:15:17.842 user 0m5.754s 00:15:17.842 sys 0m0.898s 00:15:17.842 13:58:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.842 13:58:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 13:58:06 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_function_test_concat raid_function_test concat 00:15:17.842 13:58:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:17.842 13:58:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.842 13:58:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 ************************************ 00:15:17.842 START TEST raid_function_test_concat 00:15:17.842 ************************************ 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # 
local nbd=/dev/nbd0 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=119718 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 119718' 00:15:17.842 Process raid pid: 119718 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 119718 /var/tmp/spdk-raid.sock 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 119718 ']' 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.842 13:58:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 [2024-07-25 13:58:06.581551] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:17.842 [2024-07-25 13:58:06.582074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.842 [2024-07-25 13:58:06.753224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.100 [2024-07-25 13:58:06.965422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.358 [2024-07-25 13:58:07.162793] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.616 13:58:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.616 13:58:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:15:18.616 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:15:18.616 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:15:18.616 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:18.616 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:15:18.616 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:18.875 [2024-07-25 13:58:07.849096] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:18.875 [2024-07-25 13:58:07.851408] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:18.875 [2024-07-25 13:58:07.851656] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:18.875 [2024-07-25 
13:58:07.851776] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:18.875 [2024-07-25 13:58:07.851955] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:18.875 [2024-07-25 13:58:07.852391] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:18.875 [2024-07-25 13:58:07.852524] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000012a00 00:15:18.875 [2024-07-25 13:58:07.852799] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.875 Base_1 00:15:18.875 Base_2 00:15:18.875 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:18.875 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:18.875 13:58:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.133 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:19.391 [2024-07-25 13:58:08.317254] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:19.391 /dev/nbd0 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:15:19.391 13:58:08 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.391 1+0 records in 00:15:19.391 1+0 records out 00:15:19.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728378 s, 5.6 MB/s 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:19.391 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:19.691 { 00:15:19.691 "nbd_device": "/dev/nbd0", 00:15:19.691 "bdev_name": "raid" 00:15:19.691 } 00:15:19.691 ]' 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:19.691 { 00:15:19.691 "nbd_device": "/dev/nbd0", 00:15:19.691 "bdev_name": "raid" 00:15:19.691 } 00:15:19.691 ]' 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:19.691 13:58:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:15:19.691 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:15:19.949 4096+0 records in 00:15:19.949 4096+0 records out 00:15:19.949 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0299052 s, 70.1 MB/s 00:15:19.949 13:58:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:20.207 4096+0 records in 00:15:20.207 4096+0 records out 00:15:20.207 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.278577 s, 7.5 MB/s 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:20.207 128+0 records in 00:15:20.207 128+0 records out 00:15:20.207 65536 bytes (66 kB, 64 KiB) copied, 0.00112945 s, 58.0 MB/s 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat 
-- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:20.207 2035+0 records in 00:15:20.207 2035+0 records out 00:15:20.207 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00567274 s, 184 MB/s 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:20.207 456+0 records in 00:15:20.207 456+0 records out 00:15:20.207 233472 bytes (233 kB, 228 KiB) copied, 0.00180966 s, 129 MB/s 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.207 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.465 [2024-07-25 13:58:09.356656] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.465 
13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:20.465 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 119718 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 119718 ']' 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 119718 00:15:20.722 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:15:20.723 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.723 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119718 00:15:20.723 killing process with pid 119718 00:15:20.723 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.723 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.723 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119718' 00:15:20.723 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 119718 00:15:20.723 [2024-07-25 13:58:09.671024] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.723 13:58:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 119718 00:15:20.723 [2024-07-25 13:58:09.671127] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:20.723 [2024-07-25 13:58:09.671183] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.723 [2024-07-25 13:58:09.671195] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid, state offline 00:15:20.981 [2024-07-25 13:58:09.816692] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.915 13:58:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:15:21.915 00:15:21.915 real 0m4.349s 00:15:21.915 user 0m5.553s 00:15:21.915 ************************************ 00:15:21.915 END TEST raid_function_test_concat 00:15:21.915 ************************************ 00:15:21.915 sys 0m0.936s 00:15:21.915 13:58:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.915 13:58:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:15:21.915 13:58:10 bdev_raid -- bdev/bdev_raid.sh@1016 -- # run_test raid0_resize_test raid_resize_test 0 00:15:21.915 13:58:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:21.915 13:58:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.915 13:58:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.915 ************************************ 00:15:21.915 START TEST raid0_resize_test 00:15:21.915 ************************************ 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=119876 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 119876' 00:15:21.915 Process raid pid: 119876 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 119876 /var/tmp/spdk-raid.sock 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 119876 ']' 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
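Both function tests above (raid0 and concat) funnel into the same raid_unmap_data_verify routine, driving the exported raid bdev through /dev/nbd0. The following is a condensed sketch of that pattern rather than the literal test code: the paths, block counts and discard ranges are taken from the trace, while the standalone-script form and variable layout are illustrative assumptions (the real loop lives in the bdev_raid.sh script named in the trace prefixes).

# Sketch of the unmap/verify pattern exercised above (assumed standalone form).
nbd=/dev/nbd0
blksize=512                      # LOG-SEC reported by lsblk in the trace
rw_blk_num=4096                  # 4096 * 512 = 2097152 bytes compared
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

# Fill the raid bdev with random data and keep a reference copy of it.
dd if=/dev/urandom of=/raidtest/raidrandtest bs=$blksize count=$rw_blk_num
dd if=/raidtest/raidrandtest of=$nbd bs=$blksize count=$rw_blk_num oflag=direct
blockdev --flushbufs $nbd
cmp -b -n $((blksize * rw_blk_num)) /raidtest/raidrandtest $nbd

# Discard a few ranges on the device, zero the same ranges in the reference
# file, and compare again: unmapped blocks must read back as zeroes.
for i in "${!unmap_blk_offs[@]}"; do
    dd if=/dev/zero of=/raidtest/raidrandtest bs=$blksize \
        seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
    blkdiscard -o $((unmap_blk_offs[i] * blksize)) -l $((unmap_blk_nums[i] * blksize)) $nbd
    blockdev --flushbufs $nbd
    cmp -b -n $((blksize * rw_blk_num)) /raidtest/raidrandtest $nbd
done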
00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.915 13:58:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.172 [2024-07-25 13:58:10.984833] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:22.172 [2024-07-25 13:58:10.985245] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.172 [2024-07-25 13:58:11.148157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.430 [2024-07-25 13:58:11.344730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.687 [2024-07-25 13:58:11.537175] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.944 13:58:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.944 13:58:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:15:22.944 13:58:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:23.202 Base_1 00:15:23.202 13:58:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:23.459 Base_2 00:15:23.459 13:58:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:15:23.459 13:58:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:15:23.717 [2024-07-25 13:58:12.674751] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:23.717 [2024-07-25 13:58:12.677141] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:23.717 [2024-07-25 13:58:12.677371] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:23.717 [2024-07-25 13:58:12.677499] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:23.717 [2024-07-25 13:58:12.677710] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:23.717 [2024-07-25 13:58:12.678134] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:23.717 [2024-07-25 13:58:12.678297] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:15:23.717 [2024-07-25 13:58:12.678632] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.717 13:58:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:23.975 [2024-07-25 13:58:12.906821] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:23.975 [2024-07-25 13:58:12.907099] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:23.975 true 00:15:23.975 13:58:12 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:23.975 13:58:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:15:24.233 [2024-07-25 13:58:13.178995] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.233 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:15:24.233 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:15:24.233 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:15:24.233 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:15:24.233 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:15:24.233 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:24.490 [2024-07-25 13:58:13.410907] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:24.490 [2024-07-25 13:58:13.411155] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:24.490 [2024-07-25 13:58:13.411549] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:15:24.490 true 00:15:24.490 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:24.490 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:15:24.746 [2024-07-25 13:58:13.679096] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 119876 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 119876 ']' 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 119876 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119876 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119876' 00:15:24.746 killing process with pid 119876 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 119876 00:15:24.746 [2024-07-25 13:58:13.720914] 
bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.746 13:58:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 119876 00:15:24.746 [2024-07-25 13:58:13.721160] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.746 [2024-07-25 13:58:13.721347] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.746 [2024-07-25 13:58:13.721457] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:15:24.746 [2024-07-25 13:58:13.722211] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.119 ************************************ 00:15:26.119 END TEST raid0_resize_test 00:15:26.119 ************************************ 00:15:26.119 13:58:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:15:26.119 00:15:26.119 real 0m3.902s 00:15:26.119 user 0m5.658s 00:15:26.119 sys 0m0.484s 00:15:26.119 13:58:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.119 13:58:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.119 13:58:14 bdev_raid -- bdev/bdev_raid.sh@1017 -- # run_test raid1_resize_test raid_resize_test 1 00:15:26.119 13:58:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:26.119 13:58:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.119 13:58:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.119 ************************************ 00:15:26.119 START TEST raid1_resize_test 00:15:26.119 ************************************ 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=119970 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 119970' 00:15:26.119 Process raid pid: 119970 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 119970 /var/tmp/spdk-raid.sock 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 119970 ']' 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:26.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:26.119 13:58:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.120 13:58:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.120 [2024-07-25 13:58:14.939609] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:15:26.120 [2024-07-25 13:58:14.940024] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.120 [2024-07-25 13:58:15.109695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.378 [2024-07-25 13:58:15.300064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.638 [2024-07-25 13:58:15.486986] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.896 13:58:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.896 13:58:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:15:26.896 13:58:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:27.154 Base_1 00:15:27.154 13:58:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:27.412 Base_2 00:15:27.412 13:58:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:15:27.412 13:58:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:15:27.670 [2024-07-25 13:58:16.692646] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:27.670 [2024-07-25 13:58:16.695198] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:27.670 [2024-07-25 13:58:16.695431] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:27.670 [2024-07-25 13:58:16.695558] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:27.670 [2024-07-25 13:58:16.695840] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:27.670 [2024-07-25 13:58:16.696337] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:27.670 [2024-07-25 13:58:16.696463] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:15:27.670 [2024-07-25 13:58:16.696779] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.670 13:58:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:27.927 [2024-07-25 13:58:16.924883] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:27.927 [2024-07-25 13:58:16.925198] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 
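For reference, the resize coverage traced here (raid0 above, raid1 now in progress) reduces to a short RPC sequence. The sketch below strings together the calls visible in the trace for the raid1 case; the shell variable rpc is a convenience shorthand of mine, and the expected block counts in the comments are read off the blkcnt comparisons in the trace rather than printed by the script itself.

# Assumed shorthand for the RPC client used throughout the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Two 32 MiB null bdevs with 512-byte blocks (65536 blocks each).
$rpc bdev_null_create Base_1 32 512
$rpc bdev_null_create Base_2 32 512
$rpc bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid    # the raid0 run used: -z 64 -r 0

# Grow only one leg, then read back the raid bdev's block count.
$rpc bdev_null_resize Base_1 64
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'        # raid1: still 65536; raid0: already 131072

# Growing the second leg lets the raid1 bdev reach 131072 blocks as well.
$rpc bdev_null_resize Base_2 64
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'        # 131072 (64 MiB)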
00:15:27.927 true 00:15:27.927 13:58:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:27.927 13:58:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:15:28.184 [2024-07-25 13:58:17.161050] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.184 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:15:28.184 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:15:28.184 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:15:28.184 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:15:28.184 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:15:28.184 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:28.443 [2024-07-25 13:58:17.429036] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:28.443 [2024-07-25 13:58:17.429365] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:28.443 [2024-07-25 13:58:17.429739] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:15:28.443 true 00:15:28.443 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:15:28.443 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:28.701 [2024-07-25 13:58:17.657182] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 119970 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 119970 ']' 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 119970 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119970 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119970' 00:15:28.701 killing process with pid 119970 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 119970 00:15:28.701 
[2024-07-25 13:58:17.703378] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.701 13:58:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 119970 00:15:28.701 [2024-07-25 13:58:17.703644] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.701 [2024-07-25 13:58:17.704250] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.701 [2024-07-25 13:58:17.704378] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:15:28.701 [2024-07-25 13:58:17.704608] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.102 ************************************ 00:15:30.102 END TEST raid1_resize_test 00:15:30.102 ************************************ 00:15:30.102 13:58:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:15:30.102 00:15:30.102 real 0m3.908s 00:15:30.102 user 0m5.703s 00:15:30.102 sys 0m0.477s 00:15:30.102 13:58:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.102 13:58:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.102 13:58:18 bdev_raid -- bdev/bdev_raid.sh@1019 -- # for n in {2..4} 00:15:30.102 13:58:18 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:15:30.102 13:58:18 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:30.102 13:58:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:30.102 13:58:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.102 13:58:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.102 ************************************ 00:15:30.102 START TEST raid_state_function_test 00:15:30.102 ************************************ 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # 
local base_bdevs 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=120060 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120060' 00:15:30.102 Process raid pid: 120060 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 120060 /var/tmp/spdk-raid.sock 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 120060 ']' 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.102 13:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:30.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:30.103 13:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.103 13:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.103 [2024-07-25 13:58:18.910545] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
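The raid_state_function_test starting here checks the "configuring" state: a raid bdev created before its base bdevs exist is registered but cannot come online until every base bdev is attached. Below is a rough sketch of that check, assembled from the RPCs that appear in the trace that follows; the direct jq field extraction is a simplification of the verify_raid_bdev_state helper, not the helper itself.

# Assumed shorthand for the RPC client used throughout the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create the raid while BaseBdev1 and BaseBdev2 do not exist yet.
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# The raid bdev shows up in bdev_raid_get_bdevs, but its state stays "configuring".
state=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[ "$state" = configuring ] || echo "unexpected state: $state"

# Attaching only one of the two base bdevs is not enough to bring it online:
# num_base_bdevs_discovered moves from 0 to 1 while the state stays configuring.
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'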
00:15:30.103 [2024-07-25 13:58:18.910964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.103 [2024-07-25 13:58:19.081842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.361 [2024-07-25 13:58:19.285771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.619 [2024-07-25 13:58:19.483136] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.877 13:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.877 13:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:30.877 13:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:31.136 [2024-07-25 13:58:20.136005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.136 [2024-07-25 13:58:20.136416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.136 [2024-07-25 13:58:20.136548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.136 [2024-07-25 13:58:20.136622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.136 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.394 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.394 "name": "Existed_Raid", 00:15:31.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.394 "strip_size_kb": 64, 00:15:31.394 "state": "configuring", 00:15:31.394 "raid_level": "raid0", 00:15:31.394 "superblock": false, 00:15:31.394 "num_base_bdevs": 2, 00:15:31.394 "num_base_bdevs_discovered": 0, 00:15:31.395 "num_base_bdevs_operational": 2, 00:15:31.395 
"base_bdevs_list": [ 00:15:31.395 { 00:15:31.395 "name": "BaseBdev1", 00:15:31.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.395 "is_configured": false, 00:15:31.395 "data_offset": 0, 00:15:31.395 "data_size": 0 00:15:31.395 }, 00:15:31.395 { 00:15:31.395 "name": "BaseBdev2", 00:15:31.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.395 "is_configured": false, 00:15:31.395 "data_offset": 0, 00:15:31.395 "data_size": 0 00:15:31.395 } 00:15:31.395 ] 00:15:31.395 }' 00:15:31.395 13:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.395 13:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 13:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.331 [2024-07-25 13:58:21.292126] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.331 [2024-07-25 13:58:21.292350] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:15:32.331 13:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:32.589 [2024-07-25 13:58:21.548240] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.589 [2024-07-25 13:58:21.548567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.589 [2024-07-25 13:58:21.548685] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.589 [2024-07-25 13:58:21.548754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.589 13:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.848 [2024-07-25 13:58:21.827566] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.848 BaseBdev1 00:15:32.848 13:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:32.848 13:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:32.848 13:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:32.848 13:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:32.848 13:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:32.848 13:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:32.848 13:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.106 13:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.405 [ 00:15:33.405 { 00:15:33.405 "name": "BaseBdev1", 00:15:33.405 "aliases": [ 00:15:33.405 "c08cec94-ffb3-46a7-ba4b-11cc1e830d58" 00:15:33.405 ], 00:15:33.405 "product_name": "Malloc disk", 00:15:33.405 "block_size": 512, 
00:15:33.405 "num_blocks": 65536, 00:15:33.405 "uuid": "c08cec94-ffb3-46a7-ba4b-11cc1e830d58", 00:15:33.405 "assigned_rate_limits": { 00:15:33.405 "rw_ios_per_sec": 0, 00:15:33.405 "rw_mbytes_per_sec": 0, 00:15:33.405 "r_mbytes_per_sec": 0, 00:15:33.405 "w_mbytes_per_sec": 0 00:15:33.405 }, 00:15:33.405 "claimed": true, 00:15:33.405 "claim_type": "exclusive_write", 00:15:33.405 "zoned": false, 00:15:33.405 "supported_io_types": { 00:15:33.405 "read": true, 00:15:33.405 "write": true, 00:15:33.405 "unmap": true, 00:15:33.405 "flush": true, 00:15:33.405 "reset": true, 00:15:33.405 "nvme_admin": false, 00:15:33.405 "nvme_io": false, 00:15:33.405 "nvme_io_md": false, 00:15:33.405 "write_zeroes": true, 00:15:33.405 "zcopy": true, 00:15:33.405 "get_zone_info": false, 00:15:33.405 "zone_management": false, 00:15:33.405 "zone_append": false, 00:15:33.405 "compare": false, 00:15:33.405 "compare_and_write": false, 00:15:33.405 "abort": true, 00:15:33.405 "seek_hole": false, 00:15:33.405 "seek_data": false, 00:15:33.405 "copy": true, 00:15:33.405 "nvme_iov_md": false 00:15:33.405 }, 00:15:33.405 "memory_domains": [ 00:15:33.405 { 00:15:33.405 "dma_device_id": "system", 00:15:33.405 "dma_device_type": 1 00:15:33.405 }, 00:15:33.405 { 00:15:33.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.405 "dma_device_type": 2 00:15:33.405 } 00:15:33.405 ], 00:15:33.405 "driver_specific": {} 00:15:33.405 } 00:15:33.405 ] 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.405 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.663 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.663 "name": "Existed_Raid", 00:15:33.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.663 "strip_size_kb": 64, 00:15:33.663 "state": "configuring", 00:15:33.663 "raid_level": "raid0", 00:15:33.663 "superblock": false, 00:15:33.663 "num_base_bdevs": 2, 00:15:33.663 "num_base_bdevs_discovered": 1, 00:15:33.663 "num_base_bdevs_operational": 2, 00:15:33.663 "base_bdevs_list": [ 00:15:33.663 { 00:15:33.663 "name": 
"BaseBdev1", 00:15:33.663 "uuid": "c08cec94-ffb3-46a7-ba4b-11cc1e830d58", 00:15:33.663 "is_configured": true, 00:15:33.663 "data_offset": 0, 00:15:33.663 "data_size": 65536 00:15:33.663 }, 00:15:33.663 { 00:15:33.663 "name": "BaseBdev2", 00:15:33.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.663 "is_configured": false, 00:15:33.663 "data_offset": 0, 00:15:33.663 "data_size": 0 00:15:33.663 } 00:15:33.663 ] 00:15:33.663 }' 00:15:33.663 13:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.663 13:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.230 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:34.490 [2024-07-25 13:58:23.516093] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.490 [2024-07-25 13:58:23.516438] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:15:34.490 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:34.752 [2024-07-25 13:58:23.772135] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.752 [2024-07-25 13:58:23.774449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.752 [2024-07-25 13:58:23.774663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.752 13:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.011 13:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.011 "name": "Existed_Raid", 
00:15:35.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.011 "strip_size_kb": 64, 00:15:35.011 "state": "configuring", 00:15:35.011 "raid_level": "raid0", 00:15:35.011 "superblock": false, 00:15:35.011 "num_base_bdevs": 2, 00:15:35.011 "num_base_bdevs_discovered": 1, 00:15:35.011 "num_base_bdevs_operational": 2, 00:15:35.011 "base_bdevs_list": [ 00:15:35.011 { 00:15:35.011 "name": "BaseBdev1", 00:15:35.011 "uuid": "c08cec94-ffb3-46a7-ba4b-11cc1e830d58", 00:15:35.011 "is_configured": true, 00:15:35.011 "data_offset": 0, 00:15:35.011 "data_size": 65536 00:15:35.011 }, 00:15:35.011 { 00:15:35.011 "name": "BaseBdev2", 00:15:35.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.011 "is_configured": false, 00:15:35.011 "data_offset": 0, 00:15:35.011 "data_size": 0 00:15:35.011 } 00:15:35.011 ] 00:15:35.011 }' 00:15:35.011 13:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.011 13:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.946 13:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.204 [2024-07-25 13:58:25.008698] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.204 [2024-07-25 13:58:25.009066] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:15:36.204 [2024-07-25 13:58:25.009116] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:36.204 [2024-07-25 13:58:25.009366] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:36.204 [2024-07-25 13:58:25.009897] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:15:36.204 [2024-07-25 13:58:25.010069] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:15:36.204 [2024-07-25 13:58:25.010495] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.204 BaseBdev2 00:15:36.204 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:36.204 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:36.204 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:36.204 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:36.204 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:36.204 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:36.204 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.462 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.720 [ 00:15:36.720 { 00:15:36.720 "name": "BaseBdev2", 00:15:36.720 "aliases": [ 00:15:36.720 "97af4a5c-4476-4599-a99f-45b3923702ac" 00:15:36.720 ], 00:15:36.720 "product_name": "Malloc disk", 00:15:36.720 "block_size": 512, 00:15:36.720 "num_blocks": 65536, 00:15:36.720 "uuid": "97af4a5c-4476-4599-a99f-45b3923702ac", 
00:15:36.720 "assigned_rate_limits": { 00:15:36.720 "rw_ios_per_sec": 0, 00:15:36.720 "rw_mbytes_per_sec": 0, 00:15:36.720 "r_mbytes_per_sec": 0, 00:15:36.720 "w_mbytes_per_sec": 0 00:15:36.720 }, 00:15:36.720 "claimed": true, 00:15:36.720 "claim_type": "exclusive_write", 00:15:36.720 "zoned": false, 00:15:36.720 "supported_io_types": { 00:15:36.720 "read": true, 00:15:36.720 "write": true, 00:15:36.720 "unmap": true, 00:15:36.720 "flush": true, 00:15:36.720 "reset": true, 00:15:36.720 "nvme_admin": false, 00:15:36.720 "nvme_io": false, 00:15:36.720 "nvme_io_md": false, 00:15:36.720 "write_zeroes": true, 00:15:36.720 "zcopy": true, 00:15:36.720 "get_zone_info": false, 00:15:36.720 "zone_management": false, 00:15:36.720 "zone_append": false, 00:15:36.720 "compare": false, 00:15:36.720 "compare_and_write": false, 00:15:36.720 "abort": true, 00:15:36.720 "seek_hole": false, 00:15:36.720 "seek_data": false, 00:15:36.720 "copy": true, 00:15:36.720 "nvme_iov_md": false 00:15:36.720 }, 00:15:36.720 "memory_domains": [ 00:15:36.720 { 00:15:36.720 "dma_device_id": "system", 00:15:36.720 "dma_device_type": 1 00:15:36.720 }, 00:15:36.720 { 00:15:36.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.720 "dma_device_type": 2 00:15:36.720 } 00:15:36.720 ], 00:15:36.720 "driver_specific": {} 00:15:36.720 } 00:15:36.720 ] 00:15:36.720 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:36.720 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:36.720 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:36.720 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.721 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.979 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.979 "name": "Existed_Raid", 00:15:36.979 "uuid": "cb9199b8-52a0-4570-b800-f6668d57bf93", 00:15:36.979 "strip_size_kb": 64, 00:15:36.979 "state": "online", 00:15:36.979 "raid_level": "raid0", 00:15:36.979 "superblock": false, 00:15:36.979 "num_base_bdevs": 2, 00:15:36.979 "num_base_bdevs_discovered": 2, 00:15:36.979 
"num_base_bdevs_operational": 2, 00:15:36.979 "base_bdevs_list": [ 00:15:36.979 { 00:15:36.979 "name": "BaseBdev1", 00:15:36.979 "uuid": "c08cec94-ffb3-46a7-ba4b-11cc1e830d58", 00:15:36.979 "is_configured": true, 00:15:36.979 "data_offset": 0, 00:15:36.979 "data_size": 65536 00:15:36.979 }, 00:15:36.979 { 00:15:36.979 "name": "BaseBdev2", 00:15:36.979 "uuid": "97af4a5c-4476-4599-a99f-45b3923702ac", 00:15:36.979 "is_configured": true, 00:15:36.979 "data_offset": 0, 00:15:36.979 "data_size": 65536 00:15:36.979 } 00:15:36.979 ] 00:15:36.979 }' 00:15:36.979 13:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.979 13:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:37.546 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:37.804 [2024-07-25 13:58:26.809542] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.804 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:37.804 "name": "Existed_Raid", 00:15:37.804 "aliases": [ 00:15:37.804 "cb9199b8-52a0-4570-b800-f6668d57bf93" 00:15:37.804 ], 00:15:37.804 "product_name": "Raid Volume", 00:15:37.804 "block_size": 512, 00:15:37.804 "num_blocks": 131072, 00:15:37.804 "uuid": "cb9199b8-52a0-4570-b800-f6668d57bf93", 00:15:37.804 "assigned_rate_limits": { 00:15:37.804 "rw_ios_per_sec": 0, 00:15:37.804 "rw_mbytes_per_sec": 0, 00:15:37.804 "r_mbytes_per_sec": 0, 00:15:37.804 "w_mbytes_per_sec": 0 00:15:37.804 }, 00:15:37.804 "claimed": false, 00:15:37.804 "zoned": false, 00:15:37.804 "supported_io_types": { 00:15:37.804 "read": true, 00:15:37.804 "write": true, 00:15:37.804 "unmap": true, 00:15:37.804 "flush": true, 00:15:37.805 "reset": true, 00:15:37.805 "nvme_admin": false, 00:15:37.805 "nvme_io": false, 00:15:37.805 "nvme_io_md": false, 00:15:37.805 "write_zeroes": true, 00:15:37.805 "zcopy": false, 00:15:37.805 "get_zone_info": false, 00:15:37.805 "zone_management": false, 00:15:37.805 "zone_append": false, 00:15:37.805 "compare": false, 00:15:37.805 "compare_and_write": false, 00:15:37.805 "abort": false, 00:15:37.805 "seek_hole": false, 00:15:37.805 "seek_data": false, 00:15:37.805 "copy": false, 00:15:37.805 "nvme_iov_md": false 00:15:37.805 }, 00:15:37.805 "memory_domains": [ 00:15:37.805 { 00:15:37.805 "dma_device_id": "system", 00:15:37.805 "dma_device_type": 1 00:15:37.805 }, 00:15:37.805 { 00:15:37.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.805 "dma_device_type": 2 00:15:37.805 }, 00:15:37.805 { 00:15:37.805 "dma_device_id": "system", 00:15:37.805 "dma_device_type": 1 00:15:37.805 }, 
00:15:37.805 { 00:15:37.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.805 "dma_device_type": 2 00:15:37.805 } 00:15:37.805 ], 00:15:37.805 "driver_specific": { 00:15:37.805 "raid": { 00:15:37.805 "uuid": "cb9199b8-52a0-4570-b800-f6668d57bf93", 00:15:37.805 "strip_size_kb": 64, 00:15:37.805 "state": "online", 00:15:37.805 "raid_level": "raid0", 00:15:37.805 "superblock": false, 00:15:37.805 "num_base_bdevs": 2, 00:15:37.805 "num_base_bdevs_discovered": 2, 00:15:37.805 "num_base_bdevs_operational": 2, 00:15:37.805 "base_bdevs_list": [ 00:15:37.805 { 00:15:37.805 "name": "BaseBdev1", 00:15:37.805 "uuid": "c08cec94-ffb3-46a7-ba4b-11cc1e830d58", 00:15:37.805 "is_configured": true, 00:15:37.805 "data_offset": 0, 00:15:37.805 "data_size": 65536 00:15:37.805 }, 00:15:37.805 { 00:15:37.805 "name": "BaseBdev2", 00:15:37.805 "uuid": "97af4a5c-4476-4599-a99f-45b3923702ac", 00:15:37.805 "is_configured": true, 00:15:37.805 "data_offset": 0, 00:15:37.805 "data_size": 65536 00:15:37.805 } 00:15:37.805 ] 00:15:37.805 } 00:15:37.805 } 00:15:37.805 }' 00:15:37.805 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:38.063 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:38.063 BaseBdev2' 00:15:38.063 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.063 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:38.063 13:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.321 "name": "BaseBdev1", 00:15:38.321 "aliases": [ 00:15:38.321 "c08cec94-ffb3-46a7-ba4b-11cc1e830d58" 00:15:38.321 ], 00:15:38.321 "product_name": "Malloc disk", 00:15:38.321 "block_size": 512, 00:15:38.321 "num_blocks": 65536, 00:15:38.321 "uuid": "c08cec94-ffb3-46a7-ba4b-11cc1e830d58", 00:15:38.321 "assigned_rate_limits": { 00:15:38.321 "rw_ios_per_sec": 0, 00:15:38.321 "rw_mbytes_per_sec": 0, 00:15:38.321 "r_mbytes_per_sec": 0, 00:15:38.321 "w_mbytes_per_sec": 0 00:15:38.321 }, 00:15:38.321 "claimed": true, 00:15:38.321 "claim_type": "exclusive_write", 00:15:38.321 "zoned": false, 00:15:38.321 "supported_io_types": { 00:15:38.321 "read": true, 00:15:38.321 "write": true, 00:15:38.321 "unmap": true, 00:15:38.321 "flush": true, 00:15:38.321 "reset": true, 00:15:38.321 "nvme_admin": false, 00:15:38.321 "nvme_io": false, 00:15:38.321 "nvme_io_md": false, 00:15:38.321 "write_zeroes": true, 00:15:38.321 "zcopy": true, 00:15:38.321 "get_zone_info": false, 00:15:38.321 "zone_management": false, 00:15:38.321 "zone_append": false, 00:15:38.321 "compare": false, 00:15:38.321 "compare_and_write": false, 00:15:38.321 "abort": true, 00:15:38.321 "seek_hole": false, 00:15:38.321 "seek_data": false, 00:15:38.321 "copy": true, 00:15:38.321 "nvme_iov_md": false 00:15:38.321 }, 00:15:38.321 "memory_domains": [ 00:15:38.321 { 00:15:38.321 "dma_device_id": "system", 00:15:38.321 "dma_device_type": 1 00:15:38.321 }, 00:15:38.321 { 00:15:38.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.321 "dma_device_type": 2 00:15:38.321 } 00:15:38.321 ], 00:15:38.321 "driver_specific": {} 00:15:38.321 }' 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.321 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.579 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.579 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.579 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.579 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.579 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.579 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:38.579 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.838 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.838 "name": "BaseBdev2", 00:15:38.838 "aliases": [ 00:15:38.838 "97af4a5c-4476-4599-a99f-45b3923702ac" 00:15:38.838 ], 00:15:38.838 "product_name": "Malloc disk", 00:15:38.838 "block_size": 512, 00:15:38.838 "num_blocks": 65536, 00:15:38.838 "uuid": "97af4a5c-4476-4599-a99f-45b3923702ac", 00:15:38.838 "assigned_rate_limits": { 00:15:38.838 "rw_ios_per_sec": 0, 00:15:38.838 "rw_mbytes_per_sec": 0, 00:15:38.838 "r_mbytes_per_sec": 0, 00:15:38.838 "w_mbytes_per_sec": 0 00:15:38.838 }, 00:15:38.838 "claimed": true, 00:15:38.838 "claim_type": "exclusive_write", 00:15:38.838 "zoned": false, 00:15:38.838 "supported_io_types": { 00:15:38.838 "read": true, 00:15:38.838 "write": true, 00:15:38.838 "unmap": true, 00:15:38.838 "flush": true, 00:15:38.838 "reset": true, 00:15:38.838 "nvme_admin": false, 00:15:38.838 "nvme_io": false, 00:15:38.838 "nvme_io_md": false, 00:15:38.838 "write_zeroes": true, 00:15:38.838 "zcopy": true, 00:15:38.838 "get_zone_info": false, 00:15:38.838 "zone_management": false, 00:15:38.838 "zone_append": false, 00:15:38.838 "compare": false, 00:15:38.838 "compare_and_write": false, 00:15:38.838 "abort": true, 00:15:38.838 "seek_hole": false, 00:15:38.838 "seek_data": false, 00:15:38.838 "copy": true, 00:15:38.838 "nvme_iov_md": false 00:15:38.838 }, 00:15:38.838 "memory_domains": [ 00:15:38.838 { 00:15:38.838 "dma_device_id": "system", 00:15:38.838 "dma_device_type": 1 00:15:38.838 }, 00:15:38.838 { 00:15:38.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.838 "dma_device_type": 2 00:15:38.838 } 00:15:38.838 ], 00:15:38.839 "driver_specific": {} 00:15:38.839 }' 00:15:38.839 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.839 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.839 13:58:27 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.839 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.097 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.097 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.097 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.097 13:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.097 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.097 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.097 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.097 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.097 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:39.356 [2024-07-25 13:58:28.349773] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.356 [2024-07-25 13:58:28.349999] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.356 [2024-07-25 13:58:28.350187] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.614 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.873 13:58:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.873 "name": "Existed_Raid", 00:15:39.873 "uuid": "cb9199b8-52a0-4570-b800-f6668d57bf93", 00:15:39.873 "strip_size_kb": 64, 00:15:39.873 "state": "offline", 00:15:39.873 "raid_level": "raid0", 00:15:39.873 "superblock": false, 00:15:39.873 "num_base_bdevs": 2, 00:15:39.873 "num_base_bdevs_discovered": 1, 00:15:39.873 "num_base_bdevs_operational": 1, 00:15:39.873 "base_bdevs_list": [ 00:15:39.873 { 00:15:39.873 "name": null, 00:15:39.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.873 "is_configured": false, 00:15:39.873 "data_offset": 0, 00:15:39.873 "data_size": 65536 00:15:39.873 }, 00:15:39.873 { 00:15:39.873 "name": "BaseBdev2", 00:15:39.873 "uuid": "97af4a5c-4476-4599-a99f-45b3923702ac", 00:15:39.873 "is_configured": true, 00:15:39.873 "data_offset": 0, 00:15:39.873 "data_size": 65536 00:15:39.873 } 00:15:39.873 ] 00:15:39.873 }' 00:15:39.873 13:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.873 13:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.465 13:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:40.465 13:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:40.465 13:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.465 13:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:40.724 13:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:40.724 13:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.724 13:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:40.982 [2024-07-25 13:58:29.944536] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.982 [2024-07-25 13:58:29.944824] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:15:41.239 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:41.239 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:41.239 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.239 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 120060 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 120060 ']' 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 120060 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120060 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120060' 00:15:41.498 killing process with pid 120060 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 120060 00:15:41.498 [2024-07-25 13:58:30.363551] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.498 13:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 120060 00:15:41.498 [2024-07-25 13:58:30.363809] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.431 ************************************ 00:15:42.431 END TEST raid_state_function_test 00:15:42.431 ************************************ 00:15:42.431 13:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:42.431 00:15:42.431 real 0m12.617s 00:15:42.431 user 0m22.321s 00:15:42.431 sys 0m1.476s 00:15:42.431 13:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.431 13:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.689 13:58:31 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:42.689 13:58:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:42.689 13:58:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.689 13:58:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.689 ************************************ 00:15:42.689 START TEST raid_state_function_test_sb 00:15:42.689 ************************************ 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:42.689 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.689 
13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=120455 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:42.690 Process raid pid: 120455 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120455' 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 120455 /var/tmp/spdk-raid.sock 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 120455 ']' 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:42.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.690 13:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.690 [2024-07-25 13:58:31.585211] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:15:42.690 [2024-07-25 13:58:31.585681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.947 [2024-07-25 13:58:31.757775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.947 [2024-07-25 13:58:31.976315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.206 [2024-07-25 13:58:32.183801] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.771 13:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.771 13:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:43.771 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:44.029 [2024-07-25 13:58:32.818239] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.029 [2024-07-25 13:58:32.818597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.029 [2024-07-25 13:58:32.818720] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.029 [2024-07-25 13:58:32.818825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.029 13:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.288 13:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.288 "name": "Existed_Raid", 00:15:44.288 "uuid": "3fb40737-eb13-42c0-8875-cc0f0aa1632f", 00:15:44.288 "strip_size_kb": 64, 00:15:44.288 "state": "configuring", 00:15:44.288 "raid_level": "raid0", 00:15:44.288 "superblock": true, 00:15:44.288 "num_base_bdevs": 2, 00:15:44.288 "num_base_bdevs_discovered": 0, 00:15:44.288 
"num_base_bdevs_operational": 2, 00:15:44.288 "base_bdevs_list": [ 00:15:44.288 { 00:15:44.288 "name": "BaseBdev1", 00:15:44.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.288 "is_configured": false, 00:15:44.288 "data_offset": 0, 00:15:44.288 "data_size": 0 00:15:44.288 }, 00:15:44.288 { 00:15:44.288 "name": "BaseBdev2", 00:15:44.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.288 "is_configured": false, 00:15:44.288 "data_offset": 0, 00:15:44.288 "data_size": 0 00:15:44.288 } 00:15:44.288 ] 00:15:44.288 }' 00:15:44.288 13:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.288 13:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.852 13:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:45.110 [2024-07-25 13:58:34.054439] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.110 [2024-07-25 13:58:34.054687] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:15:45.110 13:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:45.369 [2024-07-25 13:58:34.346567] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.369 [2024-07-25 13:58:34.346794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.369 [2024-07-25 13:58:34.346909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.369 [2024-07-25 13:58:34.346991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.369 13:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.628 [2024-07-25 13:58:34.632809] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.628 BaseBdev1 00:15:45.628 13:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:45.628 13:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:45.628 13:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:45.628 13:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:45.628 13:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:45.628 13:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:45.628 13:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.886 13:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:46.144 [ 00:15:46.144 { 00:15:46.144 "name": "BaseBdev1", 00:15:46.144 "aliases": [ 00:15:46.144 "baba72d5-5afc-4e71-824c-ceb0e76feea7" 
00:15:46.144 ], 00:15:46.144 "product_name": "Malloc disk", 00:15:46.144 "block_size": 512, 00:15:46.144 "num_blocks": 65536, 00:15:46.144 "uuid": "baba72d5-5afc-4e71-824c-ceb0e76feea7", 00:15:46.144 "assigned_rate_limits": { 00:15:46.144 "rw_ios_per_sec": 0, 00:15:46.144 "rw_mbytes_per_sec": 0, 00:15:46.144 "r_mbytes_per_sec": 0, 00:15:46.144 "w_mbytes_per_sec": 0 00:15:46.144 }, 00:15:46.144 "claimed": true, 00:15:46.144 "claim_type": "exclusive_write", 00:15:46.144 "zoned": false, 00:15:46.144 "supported_io_types": { 00:15:46.144 "read": true, 00:15:46.144 "write": true, 00:15:46.144 "unmap": true, 00:15:46.144 "flush": true, 00:15:46.144 "reset": true, 00:15:46.144 "nvme_admin": false, 00:15:46.144 "nvme_io": false, 00:15:46.144 "nvme_io_md": false, 00:15:46.144 "write_zeroes": true, 00:15:46.144 "zcopy": true, 00:15:46.144 "get_zone_info": false, 00:15:46.144 "zone_management": false, 00:15:46.144 "zone_append": false, 00:15:46.144 "compare": false, 00:15:46.144 "compare_and_write": false, 00:15:46.144 "abort": true, 00:15:46.144 "seek_hole": false, 00:15:46.144 "seek_data": false, 00:15:46.144 "copy": true, 00:15:46.144 "nvme_iov_md": false 00:15:46.144 }, 00:15:46.144 "memory_domains": [ 00:15:46.144 { 00:15:46.144 "dma_device_id": "system", 00:15:46.144 "dma_device_type": 1 00:15:46.144 }, 00:15:46.144 { 00:15:46.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.144 "dma_device_type": 2 00:15:46.144 } 00:15:46.144 ], 00:15:46.144 "driver_specific": {} 00:15:46.144 } 00:15:46.144 ] 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.403 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.663 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.663 "name": "Existed_Raid", 00:15:46.663 "uuid": "8c228390-272e-4618-9d17-844f3c192894", 00:15:46.663 "strip_size_kb": 64, 00:15:46.663 "state": "configuring", 00:15:46.663 "raid_level": "raid0", 00:15:46.663 "superblock": true, 00:15:46.663 "num_base_bdevs": 2, 00:15:46.663 
"num_base_bdevs_discovered": 1, 00:15:46.663 "num_base_bdevs_operational": 2, 00:15:46.663 "base_bdevs_list": [ 00:15:46.663 { 00:15:46.663 "name": "BaseBdev1", 00:15:46.663 "uuid": "baba72d5-5afc-4e71-824c-ceb0e76feea7", 00:15:46.663 "is_configured": true, 00:15:46.663 "data_offset": 2048, 00:15:46.663 "data_size": 63488 00:15:46.663 }, 00:15:46.663 { 00:15:46.663 "name": "BaseBdev2", 00:15:46.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.663 "is_configured": false, 00:15:46.663 "data_offset": 0, 00:15:46.663 "data_size": 0 00:15:46.663 } 00:15:46.663 ] 00:15:46.663 }' 00:15:46.663 13:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.663 13:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.230 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:47.488 [2024-07-25 13:58:36.417333] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.488 [2024-07-25 13:58:36.417625] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:15:47.488 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:47.746 [2024-07-25 13:58:36.705404] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.746 [2024-07-25 13:58:36.707864] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.746 [2024-07-25 13:58:36.708076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.746 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.004 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.004 "name": "Existed_Raid", 00:15:48.004 "uuid": "c28643c1-0ff5-4a89-a93b-eef728af4d10", 00:15:48.004 "strip_size_kb": 64, 00:15:48.004 "state": "configuring", 00:15:48.004 "raid_level": "raid0", 00:15:48.004 "superblock": true, 00:15:48.004 "num_base_bdevs": 2, 00:15:48.005 "num_base_bdevs_discovered": 1, 00:15:48.005 "num_base_bdevs_operational": 2, 00:15:48.005 "base_bdevs_list": [ 00:15:48.005 { 00:15:48.005 "name": "BaseBdev1", 00:15:48.005 "uuid": "baba72d5-5afc-4e71-824c-ceb0e76feea7", 00:15:48.005 "is_configured": true, 00:15:48.005 "data_offset": 2048, 00:15:48.005 "data_size": 63488 00:15:48.005 }, 00:15:48.005 { 00:15:48.005 "name": "BaseBdev2", 00:15:48.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.005 "is_configured": false, 00:15:48.005 "data_offset": 0, 00:15:48.005 "data_size": 0 00:15:48.005 } 00:15:48.005 ] 00:15:48.005 }' 00:15:48.005 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.005 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.940 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.199 [2024-07-25 13:58:38.018745] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.199 [2024-07-25 13:58:38.019357] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:15:49.199 [2024-07-25 13:58:38.019494] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:49.199 [2024-07-25 13:58:38.019664] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:49.199 [2024-07-25 13:58:38.020081] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:15:49.199 [2024-07-25 13:58:38.020244] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:15:49.199 BaseBdev2 00:15:49.199 [2024-07-25 13:58:38.020544] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.199 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:49.199 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:49.199 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:49.199 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:49.199 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:49.199 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:49.199 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.457 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.716 [ 00:15:49.716 { 00:15:49.716 "name": 
"BaseBdev2", 00:15:49.716 "aliases": [ 00:15:49.716 "3e0bd1f2-07b6-4e1e-8501-6a5b35bfeb65" 00:15:49.716 ], 00:15:49.716 "product_name": "Malloc disk", 00:15:49.716 "block_size": 512, 00:15:49.716 "num_blocks": 65536, 00:15:49.716 "uuid": "3e0bd1f2-07b6-4e1e-8501-6a5b35bfeb65", 00:15:49.716 "assigned_rate_limits": { 00:15:49.716 "rw_ios_per_sec": 0, 00:15:49.716 "rw_mbytes_per_sec": 0, 00:15:49.716 "r_mbytes_per_sec": 0, 00:15:49.716 "w_mbytes_per_sec": 0 00:15:49.716 }, 00:15:49.716 "claimed": true, 00:15:49.716 "claim_type": "exclusive_write", 00:15:49.716 "zoned": false, 00:15:49.716 "supported_io_types": { 00:15:49.716 "read": true, 00:15:49.716 "write": true, 00:15:49.716 "unmap": true, 00:15:49.716 "flush": true, 00:15:49.716 "reset": true, 00:15:49.716 "nvme_admin": false, 00:15:49.716 "nvme_io": false, 00:15:49.716 "nvme_io_md": false, 00:15:49.716 "write_zeroes": true, 00:15:49.716 "zcopy": true, 00:15:49.716 "get_zone_info": false, 00:15:49.716 "zone_management": false, 00:15:49.716 "zone_append": false, 00:15:49.716 "compare": false, 00:15:49.716 "compare_and_write": false, 00:15:49.716 "abort": true, 00:15:49.716 "seek_hole": false, 00:15:49.716 "seek_data": false, 00:15:49.716 "copy": true, 00:15:49.716 "nvme_iov_md": false 00:15:49.716 }, 00:15:49.716 "memory_domains": [ 00:15:49.716 { 00:15:49.716 "dma_device_id": "system", 00:15:49.716 "dma_device_type": 1 00:15:49.716 }, 00:15:49.716 { 00:15:49.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.716 "dma_device_type": 2 00:15:49.716 } 00:15:49.716 ], 00:15:49.716 "driver_specific": {} 00:15:49.716 } 00:15:49.716 ] 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:49.716 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:49.717 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:49.717 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.717 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.717 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.717 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.717 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.717 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.974 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.974 
"name": "Existed_Raid", 00:15:49.974 "uuid": "c28643c1-0ff5-4a89-a93b-eef728af4d10", 00:15:49.974 "strip_size_kb": 64, 00:15:49.974 "state": "online", 00:15:49.974 "raid_level": "raid0", 00:15:49.974 "superblock": true, 00:15:49.974 "num_base_bdevs": 2, 00:15:49.974 "num_base_bdevs_discovered": 2, 00:15:49.974 "num_base_bdevs_operational": 2, 00:15:49.974 "base_bdevs_list": [ 00:15:49.974 { 00:15:49.974 "name": "BaseBdev1", 00:15:49.974 "uuid": "baba72d5-5afc-4e71-824c-ceb0e76feea7", 00:15:49.974 "is_configured": true, 00:15:49.974 "data_offset": 2048, 00:15:49.974 "data_size": 63488 00:15:49.974 }, 00:15:49.974 { 00:15:49.974 "name": "BaseBdev2", 00:15:49.974 "uuid": "3e0bd1f2-07b6-4e1e-8501-6a5b35bfeb65", 00:15:49.974 "is_configured": true, 00:15:49.974 "data_offset": 2048, 00:15:49.974 "data_size": 63488 00:15:49.974 } 00:15:49.974 ] 00:15:49.974 }' 00:15:49.974 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.974 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.537 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.537 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:50.538 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:50.538 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:50.538 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:50.538 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:50.538 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:50.538 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:50.795 [2024-07-25 13:58:39.803644] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.795 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:50.795 "name": "Existed_Raid", 00:15:50.795 "aliases": [ 00:15:50.795 "c28643c1-0ff5-4a89-a93b-eef728af4d10" 00:15:50.795 ], 00:15:50.795 "product_name": "Raid Volume", 00:15:50.795 "block_size": 512, 00:15:50.795 "num_blocks": 126976, 00:15:50.795 "uuid": "c28643c1-0ff5-4a89-a93b-eef728af4d10", 00:15:50.795 "assigned_rate_limits": { 00:15:50.795 "rw_ios_per_sec": 0, 00:15:50.795 "rw_mbytes_per_sec": 0, 00:15:50.795 "r_mbytes_per_sec": 0, 00:15:50.795 "w_mbytes_per_sec": 0 00:15:50.795 }, 00:15:50.795 "claimed": false, 00:15:50.795 "zoned": false, 00:15:50.795 "supported_io_types": { 00:15:50.795 "read": true, 00:15:50.795 "write": true, 00:15:50.795 "unmap": true, 00:15:50.795 "flush": true, 00:15:50.795 "reset": true, 00:15:50.795 "nvme_admin": false, 00:15:50.795 "nvme_io": false, 00:15:50.795 "nvme_io_md": false, 00:15:50.795 "write_zeroes": true, 00:15:50.795 "zcopy": false, 00:15:50.795 "get_zone_info": false, 00:15:50.795 "zone_management": false, 00:15:50.795 "zone_append": false, 00:15:50.795 "compare": false, 00:15:50.795 "compare_and_write": false, 00:15:50.795 "abort": false, 00:15:50.795 "seek_hole": false, 00:15:50.795 "seek_data": false, 00:15:50.795 "copy": false, 00:15:50.795 "nvme_iov_md": false 00:15:50.795 }, 00:15:50.795 
"memory_domains": [ 00:15:50.795 { 00:15:50.795 "dma_device_id": "system", 00:15:50.795 "dma_device_type": 1 00:15:50.795 }, 00:15:50.795 { 00:15:50.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.795 "dma_device_type": 2 00:15:50.795 }, 00:15:50.795 { 00:15:50.795 "dma_device_id": "system", 00:15:50.795 "dma_device_type": 1 00:15:50.795 }, 00:15:50.795 { 00:15:50.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.795 "dma_device_type": 2 00:15:50.795 } 00:15:50.795 ], 00:15:50.795 "driver_specific": { 00:15:50.795 "raid": { 00:15:50.795 "uuid": "c28643c1-0ff5-4a89-a93b-eef728af4d10", 00:15:50.795 "strip_size_kb": 64, 00:15:50.795 "state": "online", 00:15:50.795 "raid_level": "raid0", 00:15:50.795 "superblock": true, 00:15:50.795 "num_base_bdevs": 2, 00:15:50.795 "num_base_bdevs_discovered": 2, 00:15:50.795 "num_base_bdevs_operational": 2, 00:15:50.795 "base_bdevs_list": [ 00:15:50.795 { 00:15:50.795 "name": "BaseBdev1", 00:15:50.795 "uuid": "baba72d5-5afc-4e71-824c-ceb0e76feea7", 00:15:50.795 "is_configured": true, 00:15:50.795 "data_offset": 2048, 00:15:50.795 "data_size": 63488 00:15:50.795 }, 00:15:50.795 { 00:15:50.795 "name": "BaseBdev2", 00:15:50.795 "uuid": "3e0bd1f2-07b6-4e1e-8501-6a5b35bfeb65", 00:15:50.795 "is_configured": true, 00:15:50.795 "data_offset": 2048, 00:15:50.795 "data_size": 63488 00:15:50.795 } 00:15:50.795 ] 00:15:50.795 } 00:15:50.795 } 00:15:50.795 }' 00:15:50.795 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.053 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:51.053 BaseBdev2' 00:15:51.053 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.053 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:51.053 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.311 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.311 "name": "BaseBdev1", 00:15:51.311 "aliases": [ 00:15:51.311 "baba72d5-5afc-4e71-824c-ceb0e76feea7" 00:15:51.311 ], 00:15:51.311 "product_name": "Malloc disk", 00:15:51.311 "block_size": 512, 00:15:51.311 "num_blocks": 65536, 00:15:51.311 "uuid": "baba72d5-5afc-4e71-824c-ceb0e76feea7", 00:15:51.311 "assigned_rate_limits": { 00:15:51.311 "rw_ios_per_sec": 0, 00:15:51.311 "rw_mbytes_per_sec": 0, 00:15:51.311 "r_mbytes_per_sec": 0, 00:15:51.311 "w_mbytes_per_sec": 0 00:15:51.311 }, 00:15:51.311 "claimed": true, 00:15:51.311 "claim_type": "exclusive_write", 00:15:51.311 "zoned": false, 00:15:51.311 "supported_io_types": { 00:15:51.311 "read": true, 00:15:51.311 "write": true, 00:15:51.311 "unmap": true, 00:15:51.311 "flush": true, 00:15:51.311 "reset": true, 00:15:51.311 "nvme_admin": false, 00:15:51.311 "nvme_io": false, 00:15:51.311 "nvme_io_md": false, 00:15:51.311 "write_zeroes": true, 00:15:51.311 "zcopy": true, 00:15:51.311 "get_zone_info": false, 00:15:51.311 "zone_management": false, 00:15:51.311 "zone_append": false, 00:15:51.311 "compare": false, 00:15:51.311 "compare_and_write": false, 00:15:51.311 "abort": true, 00:15:51.311 "seek_hole": false, 00:15:51.311 "seek_data": false, 00:15:51.311 "copy": true, 00:15:51.311 "nvme_iov_md": false 00:15:51.311 }, 00:15:51.311 
"memory_domains": [ 00:15:51.311 { 00:15:51.311 "dma_device_id": "system", 00:15:51.311 "dma_device_type": 1 00:15:51.311 }, 00:15:51.311 { 00:15:51.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.311 "dma_device_type": 2 00:15:51.311 } 00:15:51.311 ], 00:15:51.311 "driver_specific": {} 00:15:51.311 }' 00:15:51.311 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.311 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.311 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.311 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.311 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.311 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:51.569 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.827 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.827 "name": "BaseBdev2", 00:15:51.827 "aliases": [ 00:15:51.827 "3e0bd1f2-07b6-4e1e-8501-6a5b35bfeb65" 00:15:51.827 ], 00:15:51.827 "product_name": "Malloc disk", 00:15:51.827 "block_size": 512, 00:15:51.827 "num_blocks": 65536, 00:15:51.827 "uuid": "3e0bd1f2-07b6-4e1e-8501-6a5b35bfeb65", 00:15:51.827 "assigned_rate_limits": { 00:15:51.827 "rw_ios_per_sec": 0, 00:15:51.827 "rw_mbytes_per_sec": 0, 00:15:51.827 "r_mbytes_per_sec": 0, 00:15:51.827 "w_mbytes_per_sec": 0 00:15:51.827 }, 00:15:51.827 "claimed": true, 00:15:51.827 "claim_type": "exclusive_write", 00:15:51.827 "zoned": false, 00:15:51.827 "supported_io_types": { 00:15:51.827 "read": true, 00:15:51.827 "write": true, 00:15:51.827 "unmap": true, 00:15:51.827 "flush": true, 00:15:51.827 "reset": true, 00:15:51.827 "nvme_admin": false, 00:15:51.827 "nvme_io": false, 00:15:51.827 "nvme_io_md": false, 00:15:51.827 "write_zeroes": true, 00:15:51.827 "zcopy": true, 00:15:51.827 "get_zone_info": false, 00:15:51.827 "zone_management": false, 00:15:51.827 "zone_append": false, 00:15:51.827 "compare": false, 00:15:51.827 "compare_and_write": false, 00:15:51.827 "abort": true, 00:15:51.827 "seek_hole": false, 00:15:51.827 "seek_data": false, 00:15:51.827 "copy": true, 00:15:51.827 "nvme_iov_md": false 00:15:51.827 }, 00:15:51.827 "memory_domains": [ 00:15:51.827 { 00:15:51.827 "dma_device_id": "system", 00:15:51.827 "dma_device_type": 1 00:15:51.827 }, 00:15:51.827 { 00:15:51.827 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.827 "dma_device_type": 2 00:15:51.827 } 00:15:51.827 ], 00:15:51.827 "driver_specific": {} 00:15:51.827 }' 00:15:51.827 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.827 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.086 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.086 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.086 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.086 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.086 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.086 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.086 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.086 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.352 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.352 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.352 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:52.614 [2024-07-25 13:58:41.475883] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.614 [2024-07-25 13:58:41.476137] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.614 [2024-07-25 13:58:41.476310] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.614 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.872 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.872 "name": "Existed_Raid", 00:15:52.872 "uuid": "c28643c1-0ff5-4a89-a93b-eef728af4d10", 00:15:52.872 "strip_size_kb": 64, 00:15:52.872 "state": "offline", 00:15:52.872 "raid_level": "raid0", 00:15:52.872 "superblock": true, 00:15:52.872 "num_base_bdevs": 2, 00:15:52.872 "num_base_bdevs_discovered": 1, 00:15:52.872 "num_base_bdevs_operational": 1, 00:15:52.872 "base_bdevs_list": [ 00:15:52.872 { 00:15:52.872 "name": null, 00:15:52.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.872 "is_configured": false, 00:15:52.872 "data_offset": 2048, 00:15:52.872 "data_size": 63488 00:15:52.872 }, 00:15:52.872 { 00:15:52.872 "name": "BaseBdev2", 00:15:52.872 "uuid": "3e0bd1f2-07b6-4e1e-8501-6a5b35bfeb65", 00:15:52.872 "is_configured": true, 00:15:52.872 "data_offset": 2048, 00:15:52.872 "data_size": 63488 00:15:52.872 } 00:15:52.872 ] 00:15:52.872 }' 00:15:52.872 13:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.872 13:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.882 13:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:53.882 13:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:53.882 13:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.882 13:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:53.882 13:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:53.882 13:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.882 13:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:54.143 [2024-07-25 13:58:43.126650] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.143 [2024-07-25 13:58:43.127024] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:15:54.400 13:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:54.400 13:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:54.400 13:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.400 13:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 120455 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 120455 ']' 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 120455 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120455 00:15:54.657 killing process with pid 120455 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120455' 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 120455 00:15:54.657 13:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 120455 00:15:54.657 [2024-07-25 13:58:43.599362] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.657 [2024-07-25 13:58:43.599485] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.028 13:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:56.028 00:15:56.028 real 0m13.270s 00:15:56.028 user 0m23.454s 00:15:56.028 sys 0m1.552s 00:15:56.028 13:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.028 13:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.028 ************************************ 00:15:56.028 END TEST raid_state_function_test_sb 00:15:56.028 ************************************ 00:15:56.028 13:58:44 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:56.028 13:58:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:56.028 13:58:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.028 13:58:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.028 ************************************ 00:15:56.028 START TEST raid_superblock_test 00:15:56.028 ************************************ 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:56.028 13:58:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=120855 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 120855 /var/tmp/spdk-raid.sock 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 120855 ']' 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.028 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.028 [2024-07-25 13:58:44.908565] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
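The raid_state_function_test_sb case that just finished, and the raid_superblock_test now starting up, both drive an SPDK application purely through scripts/rpc.py on the /var/tmp/spdk-raid.sock socket. A condensed sketch of the state checks performed above, assuming such an application is already listening on that socket and that BaseBdev1 was created earlier in the test with the same bdev_malloc_create parameters as BaseBdev2, is:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# re-create the array while BaseBdev2 does not exist yet: the reported state stays "configuring"
$RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
# adding the missing 32 MiB / 512-byte-block malloc bdev completes assembly: state becomes "online"
$RPC bdev_malloc_create 32 512 -b BaseBdev2
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
# raid0 has no redundancy, so removing a base bdev drops the volume to "offline"
$RPC bdev_malloc_delete BaseBdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

Each state read uses the same bdev_raid_get_bdevs all output and jq filter that verify_raid_bdev_state applies in the trace above.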
00:15:56.028 [2024-07-25 13:58:44.909000] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120855 ] 00:15:56.291 [2024-07-25 13:58:45.082402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.548 [2024-07-25 13:58:45.393991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.805 [2024-07-25 13:58:45.604950] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.062 13:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:57.320 malloc1 00:15:57.320 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.578 [2024-07-25 13:58:46.438548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.578 [2024-07-25 13:58:46.438917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.578 [2024-07-25 13:58:46.439027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:57.578 [2024-07-25 13:58:46.439282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.578 [2024-07-25 13:58:46.442001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.578 [2024-07-25 13:58:46.442186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.578 pt1 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.578 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:57.835 malloc2 00:15:57.835 13:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.093 [2024-07-25 13:58:46.994806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.093 [2024-07-25 13:58:46.995195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.093 [2024-07-25 13:58:46.995374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:58.093 [2024-07-25 13:58:46.995512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.093 [2024-07-25 13:58:46.998258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.093 [2024-07-25 13:58:46.998439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.093 pt2 00:15:58.093 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:58.093 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:58.093 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:58.351 [2024-07-25 13:58:47.239033] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.351 [2024-07-25 13:58:47.241443] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.351 [2024-07-25 13:58:47.241802] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:58.351 [2024-07-25 13:58:47.241944] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:58.351 [2024-07-25 13:58:47.242139] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:58.351 [2024-07-25 13:58:47.242664] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:58.351 [2024-07-25 13:58:47.242809] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:15:58.351 [2024-07-25 13:58:47.243194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.351 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.609 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.609 "name": "raid_bdev1", 00:15:58.609 "uuid": "9e12d705-3dc1-4476-a154-f821786c195c", 00:15:58.609 "strip_size_kb": 64, 00:15:58.609 "state": "online", 00:15:58.609 "raid_level": "raid0", 00:15:58.609 "superblock": true, 00:15:58.609 "num_base_bdevs": 2, 00:15:58.609 "num_base_bdevs_discovered": 2, 00:15:58.609 "num_base_bdevs_operational": 2, 00:15:58.609 "base_bdevs_list": [ 00:15:58.609 { 00:15:58.609 "name": "pt1", 00:15:58.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.609 "is_configured": true, 00:15:58.609 "data_offset": 2048, 00:15:58.609 "data_size": 63488 00:15:58.609 }, 00:15:58.609 { 00:15:58.609 "name": "pt2", 00:15:58.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.609 "is_configured": true, 00:15:58.609 "data_offset": 2048, 00:15:58.609 "data_size": 63488 00:15:58.609 } 00:15:58.609 ] 00:15:58.609 }' 00:15:58.609 13:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.609 13:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:59.544 [2024-07-25 13:58:48.463737] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.544 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:59.544 "name": "raid_bdev1", 00:15:59.544 "aliases": [ 00:15:59.544 "9e12d705-3dc1-4476-a154-f821786c195c" 00:15:59.544 ], 00:15:59.544 "product_name": "Raid Volume", 00:15:59.544 "block_size": 512, 00:15:59.544 "num_blocks": 126976, 00:15:59.544 "uuid": "9e12d705-3dc1-4476-a154-f821786c195c", 00:15:59.544 "assigned_rate_limits": { 00:15:59.544 "rw_ios_per_sec": 0, 00:15:59.544 "rw_mbytes_per_sec": 0, 00:15:59.544 "r_mbytes_per_sec": 0, 00:15:59.544 "w_mbytes_per_sec": 0 00:15:59.544 }, 
00:15:59.544 "claimed": false, 00:15:59.544 "zoned": false, 00:15:59.544 "supported_io_types": { 00:15:59.544 "read": true, 00:15:59.544 "write": true, 00:15:59.544 "unmap": true, 00:15:59.544 "flush": true, 00:15:59.544 "reset": true, 00:15:59.544 "nvme_admin": false, 00:15:59.544 "nvme_io": false, 00:15:59.544 "nvme_io_md": false, 00:15:59.544 "write_zeroes": true, 00:15:59.544 "zcopy": false, 00:15:59.544 "get_zone_info": false, 00:15:59.544 "zone_management": false, 00:15:59.544 "zone_append": false, 00:15:59.544 "compare": false, 00:15:59.544 "compare_and_write": false, 00:15:59.544 "abort": false, 00:15:59.544 "seek_hole": false, 00:15:59.544 "seek_data": false, 00:15:59.544 "copy": false, 00:15:59.544 "nvme_iov_md": false 00:15:59.544 }, 00:15:59.544 "memory_domains": [ 00:15:59.544 { 00:15:59.544 "dma_device_id": "system", 00:15:59.544 "dma_device_type": 1 00:15:59.544 }, 00:15:59.544 { 00:15:59.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.544 "dma_device_type": 2 00:15:59.544 }, 00:15:59.544 { 00:15:59.544 "dma_device_id": "system", 00:15:59.544 "dma_device_type": 1 00:15:59.544 }, 00:15:59.544 { 00:15:59.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.544 "dma_device_type": 2 00:15:59.544 } 00:15:59.544 ], 00:15:59.544 "driver_specific": { 00:15:59.544 "raid": { 00:15:59.544 "uuid": "9e12d705-3dc1-4476-a154-f821786c195c", 00:15:59.544 "strip_size_kb": 64, 00:15:59.544 "state": "online", 00:15:59.545 "raid_level": "raid0", 00:15:59.545 "superblock": true, 00:15:59.545 "num_base_bdevs": 2, 00:15:59.545 "num_base_bdevs_discovered": 2, 00:15:59.545 "num_base_bdevs_operational": 2, 00:15:59.545 "base_bdevs_list": [ 00:15:59.545 { 00:15:59.545 "name": "pt1", 00:15:59.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.545 "is_configured": true, 00:15:59.545 "data_offset": 2048, 00:15:59.545 "data_size": 63488 00:15:59.545 }, 00:15:59.545 { 00:15:59.545 "name": "pt2", 00:15:59.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.545 "is_configured": true, 00:15:59.545 "data_offset": 2048, 00:15:59.545 "data_size": 63488 00:15:59.545 } 00:15:59.545 ] 00:15:59.545 } 00:15:59.545 } 00:15:59.545 }' 00:15:59.545 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.545 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:59.545 pt2' 00:15:59.545 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:59.545 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:59.545 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:59.803 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:59.803 "name": "pt1", 00:15:59.803 "aliases": [ 00:15:59.803 "00000000-0000-0000-0000-000000000001" 00:15:59.803 ], 00:15:59.803 "product_name": "passthru", 00:15:59.803 "block_size": 512, 00:15:59.803 "num_blocks": 65536, 00:15:59.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.803 "assigned_rate_limits": { 00:15:59.803 "rw_ios_per_sec": 0, 00:15:59.803 "rw_mbytes_per_sec": 0, 00:15:59.803 "r_mbytes_per_sec": 0, 00:15:59.803 "w_mbytes_per_sec": 0 00:15:59.803 }, 00:15:59.803 "claimed": true, 00:15:59.803 "claim_type": "exclusive_write", 00:15:59.803 "zoned": false, 00:15:59.803 
"supported_io_types": { 00:15:59.803 "read": true, 00:15:59.803 "write": true, 00:15:59.803 "unmap": true, 00:15:59.803 "flush": true, 00:15:59.803 "reset": true, 00:15:59.803 "nvme_admin": false, 00:15:59.803 "nvme_io": false, 00:15:59.803 "nvme_io_md": false, 00:15:59.803 "write_zeroes": true, 00:15:59.803 "zcopy": true, 00:15:59.803 "get_zone_info": false, 00:15:59.803 "zone_management": false, 00:15:59.803 "zone_append": false, 00:15:59.803 "compare": false, 00:15:59.803 "compare_and_write": false, 00:15:59.803 "abort": true, 00:15:59.803 "seek_hole": false, 00:15:59.803 "seek_data": false, 00:15:59.803 "copy": true, 00:15:59.803 "nvme_iov_md": false 00:15:59.803 }, 00:15:59.803 "memory_domains": [ 00:15:59.803 { 00:15:59.803 "dma_device_id": "system", 00:15:59.803 "dma_device_type": 1 00:15:59.803 }, 00:15:59.803 { 00:15:59.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.803 "dma_device_type": 2 00:15:59.803 } 00:15:59.803 ], 00:15:59.803 "driver_specific": { 00:15:59.803 "passthru": { 00:15:59.803 "name": "pt1", 00:15:59.803 "base_bdev_name": "malloc1" 00:15:59.803 } 00:15:59.803 } 00:15:59.803 }' 00:15:59.803 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:59.803 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.062 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:00.062 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.062 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.062 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:00.062 13:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.062 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.062 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:00.062 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.320 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.320 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:00.320 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:00.320 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:00.320 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:00.578 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:00.578 "name": "pt2", 00:16:00.578 "aliases": [ 00:16:00.578 "00000000-0000-0000-0000-000000000002" 00:16:00.578 ], 00:16:00.578 "product_name": "passthru", 00:16:00.578 "block_size": 512, 00:16:00.578 "num_blocks": 65536, 00:16:00.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.578 "assigned_rate_limits": { 00:16:00.578 "rw_ios_per_sec": 0, 00:16:00.578 "rw_mbytes_per_sec": 0, 00:16:00.578 "r_mbytes_per_sec": 0, 00:16:00.578 "w_mbytes_per_sec": 0 00:16:00.578 }, 00:16:00.578 "claimed": true, 00:16:00.578 "claim_type": "exclusive_write", 00:16:00.578 "zoned": false, 00:16:00.578 "supported_io_types": { 00:16:00.579 "read": true, 00:16:00.579 "write": true, 00:16:00.579 "unmap": true, 00:16:00.579 "flush": true, 00:16:00.579 
"reset": true, 00:16:00.579 "nvme_admin": false, 00:16:00.579 "nvme_io": false, 00:16:00.579 "nvme_io_md": false, 00:16:00.579 "write_zeroes": true, 00:16:00.579 "zcopy": true, 00:16:00.579 "get_zone_info": false, 00:16:00.579 "zone_management": false, 00:16:00.579 "zone_append": false, 00:16:00.579 "compare": false, 00:16:00.579 "compare_and_write": false, 00:16:00.579 "abort": true, 00:16:00.579 "seek_hole": false, 00:16:00.579 "seek_data": false, 00:16:00.579 "copy": true, 00:16:00.579 "nvme_iov_md": false 00:16:00.579 }, 00:16:00.579 "memory_domains": [ 00:16:00.579 { 00:16:00.579 "dma_device_id": "system", 00:16:00.579 "dma_device_type": 1 00:16:00.579 }, 00:16:00.579 { 00:16:00.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.579 "dma_device_type": 2 00:16:00.579 } 00:16:00.579 ], 00:16:00.579 "driver_specific": { 00:16:00.579 "passthru": { 00:16:00.579 "name": "pt2", 00:16:00.579 "base_bdev_name": "malloc2" 00:16:00.579 } 00:16:00.579 } 00:16:00.579 }' 00:16:00.579 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.579 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:00.579 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:00.579 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:00.837 13:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:01.095 [2024-07-25 13:58:50.076291] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.095 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=9e12d705-3dc1-4476-a154-f821786c195c 00:16:01.095 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 9e12d705-3dc1-4476-a154-f821786c195c ']' 00:16:01.095 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:01.353 [2024-07-25 13:58:50.320014] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.353 [2024-07-25 13:58:50.320370] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.353 [2024-07-25 13:58:50.320631] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.353 [2024-07-25 13:58:50.320805] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:01.353 [2024-07-25 13:58:50.320922] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:16:01.353 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.353 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:01.612 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:01.612 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:01.612 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.612 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:01.870 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:01.870 13:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:02.129 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:02.129 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:02.388 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:02.646 [2024-07-25 13:58:51.624331] 
bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:02.646 [2024-07-25 13:58:51.626741] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:02.646 [2024-07-25 13:58:51.626975] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:02.646 [2024-07-25 13:58:51.627233] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:02.646 [2024-07-25 13:58:51.627322] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.646 [2024-07-25 13:58:51.627526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:16:02.646 request: 00:16:02.646 { 00:16:02.646 "name": "raid_bdev1", 00:16:02.646 "raid_level": "raid0", 00:16:02.646 "base_bdevs": [ 00:16:02.646 "malloc1", 00:16:02.646 "malloc2" 00:16:02.646 ], 00:16:02.646 "strip_size_kb": 64, 00:16:02.646 "superblock": false, 00:16:02.646 "method": "bdev_raid_create", 00:16:02.646 "req_id": 1 00:16:02.646 } 00:16:02.646 Got JSON-RPC error response 00:16:02.646 response: 00:16:02.646 { 00:16:02.646 "code": -17, 00:16:02.646 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:02.646 } 00:16:02.646 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:02.646 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.646 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.646 13:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.646 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.646 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:02.904 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:02.904 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:02.904 13:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:03.162 [2024-07-25 13:58:52.148485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:03.162 [2024-07-25 13:58:52.148901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.162 [2024-07-25 13:58:52.149066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:03.162 [2024-07-25 13:58:52.149211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.162 [2024-07-25 13:58:52.152230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.162 [2024-07-25 13:58:52.152472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:03.162 [2024-07-25 13:58:52.152729] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:03.162 [2024-07-25 13:58:52.152915] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.162 pt1 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.162 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.163 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.163 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.163 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.421 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.421 "name": "raid_bdev1", 00:16:03.421 "uuid": "9e12d705-3dc1-4476-a154-f821786c195c", 00:16:03.421 "strip_size_kb": 64, 00:16:03.421 "state": "configuring", 00:16:03.421 "raid_level": "raid0", 00:16:03.421 "superblock": true, 00:16:03.421 "num_base_bdevs": 2, 00:16:03.421 "num_base_bdevs_discovered": 1, 00:16:03.421 "num_base_bdevs_operational": 2, 00:16:03.421 "base_bdevs_list": [ 00:16:03.421 { 00:16:03.421 "name": "pt1", 00:16:03.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.421 "is_configured": true, 00:16:03.421 "data_offset": 2048, 00:16:03.421 "data_size": 63488 00:16:03.421 }, 00:16:03.421 { 00:16:03.421 "name": null, 00:16:03.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.421 "is_configured": false, 00:16:03.421 "data_offset": 2048, 00:16:03.421 "data_size": 63488 00:16:03.421 } 00:16:03.421 ] 00:16:03.421 }' 00:16:03.421 13:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.421 13:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.355 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.356 [2024-07-25 13:58:53.369123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.356 [2024-07-25 13:58:53.369577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.356 [2024-07-25 13:58:53.369809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:04.356 [2024-07-25 13:58:53.369986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.356 [2024-07-25 
13:58:53.370694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.356 [2024-07-25 13:58:53.370886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.356 [2024-07-25 13:58:53.371147] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.356 [2024-07-25 13:58:53.371299] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.356 [2024-07-25 13:58:53.371554] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:04.356 [2024-07-25 13:58:53.371684] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:04.356 [2024-07-25 13:58:53.371842] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:04.356 [2024-07-25 13:58:53.372250] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:04.356 [2024-07-25 13:58:53.372383] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:16:04.356 [2024-07-25 13:58:53.372651] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.356 pt2 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.356 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.922 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.922 "name": "raid_bdev1", 00:16:04.922 "uuid": "9e12d705-3dc1-4476-a154-f821786c195c", 00:16:04.922 "strip_size_kb": 64, 00:16:04.922 "state": "online", 00:16:04.922 "raid_level": "raid0", 00:16:04.922 "superblock": true, 00:16:04.922 "num_base_bdevs": 2, 00:16:04.922 "num_base_bdevs_discovered": 2, 00:16:04.922 "num_base_bdevs_operational": 2, 00:16:04.922 "base_bdevs_list": [ 00:16:04.922 { 00:16:04.922 "name": "pt1", 00:16:04.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.922 "is_configured": true, 00:16:04.922 "data_offset": 2048, 00:16:04.922 
"data_size": 63488 00:16:04.922 }, 00:16:04.922 { 00:16:04.922 "name": "pt2", 00:16:04.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.922 "is_configured": true, 00:16:04.922 "data_offset": 2048, 00:16:04.922 "data_size": 63488 00:16:04.922 } 00:16:04.922 ] 00:16:04.922 }' 00:16:04.922 13:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.922 13:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:05.489 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:05.748 [2024-07-25 13:58:54.621726] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.748 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:05.748 "name": "raid_bdev1", 00:16:05.748 "aliases": [ 00:16:05.748 "9e12d705-3dc1-4476-a154-f821786c195c" 00:16:05.748 ], 00:16:05.748 "product_name": "Raid Volume", 00:16:05.748 "block_size": 512, 00:16:05.748 "num_blocks": 126976, 00:16:05.748 "uuid": "9e12d705-3dc1-4476-a154-f821786c195c", 00:16:05.748 "assigned_rate_limits": { 00:16:05.748 "rw_ios_per_sec": 0, 00:16:05.748 "rw_mbytes_per_sec": 0, 00:16:05.748 "r_mbytes_per_sec": 0, 00:16:05.748 "w_mbytes_per_sec": 0 00:16:05.748 }, 00:16:05.748 "claimed": false, 00:16:05.748 "zoned": false, 00:16:05.748 "supported_io_types": { 00:16:05.748 "read": true, 00:16:05.748 "write": true, 00:16:05.748 "unmap": true, 00:16:05.748 "flush": true, 00:16:05.748 "reset": true, 00:16:05.748 "nvme_admin": false, 00:16:05.748 "nvme_io": false, 00:16:05.748 "nvme_io_md": false, 00:16:05.748 "write_zeroes": true, 00:16:05.748 "zcopy": false, 00:16:05.748 "get_zone_info": false, 00:16:05.748 "zone_management": false, 00:16:05.748 "zone_append": false, 00:16:05.748 "compare": false, 00:16:05.748 "compare_and_write": false, 00:16:05.748 "abort": false, 00:16:05.748 "seek_hole": false, 00:16:05.748 "seek_data": false, 00:16:05.748 "copy": false, 00:16:05.748 "nvme_iov_md": false 00:16:05.748 }, 00:16:05.748 "memory_domains": [ 00:16:05.748 { 00:16:05.748 "dma_device_id": "system", 00:16:05.748 "dma_device_type": 1 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.748 "dma_device_type": 2 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "dma_device_id": "system", 00:16:05.748 "dma_device_type": 1 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.748 "dma_device_type": 2 00:16:05.748 } 00:16:05.748 ], 00:16:05.748 "driver_specific": { 00:16:05.748 "raid": { 00:16:05.748 "uuid": "9e12d705-3dc1-4476-a154-f821786c195c", 00:16:05.748 "strip_size_kb": 64, 00:16:05.748 "state": 
"online", 00:16:05.748 "raid_level": "raid0", 00:16:05.748 "superblock": true, 00:16:05.748 "num_base_bdevs": 2, 00:16:05.748 "num_base_bdevs_discovered": 2, 00:16:05.748 "num_base_bdevs_operational": 2, 00:16:05.748 "base_bdevs_list": [ 00:16:05.748 { 00:16:05.748 "name": "pt1", 00:16:05.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 2048, 00:16:05.748 "data_size": 63488 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "pt2", 00:16:05.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 2048, 00:16:05.748 "data_size": 63488 00:16:05.748 } 00:16:05.748 ] 00:16:05.748 } 00:16:05.748 } 00:16:05.748 }' 00:16:05.748 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.749 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:05.749 pt2' 00:16:05.749 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.749 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:05.749 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.008 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.008 "name": "pt1", 00:16:06.008 "aliases": [ 00:16:06.008 "00000000-0000-0000-0000-000000000001" 00:16:06.008 ], 00:16:06.008 "product_name": "passthru", 00:16:06.008 "block_size": 512, 00:16:06.008 "num_blocks": 65536, 00:16:06.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.008 "assigned_rate_limits": { 00:16:06.008 "rw_ios_per_sec": 0, 00:16:06.008 "rw_mbytes_per_sec": 0, 00:16:06.008 "r_mbytes_per_sec": 0, 00:16:06.008 "w_mbytes_per_sec": 0 00:16:06.008 }, 00:16:06.008 "claimed": true, 00:16:06.008 "claim_type": "exclusive_write", 00:16:06.008 "zoned": false, 00:16:06.008 "supported_io_types": { 00:16:06.008 "read": true, 00:16:06.008 "write": true, 00:16:06.008 "unmap": true, 00:16:06.008 "flush": true, 00:16:06.008 "reset": true, 00:16:06.008 "nvme_admin": false, 00:16:06.008 "nvme_io": false, 00:16:06.008 "nvme_io_md": false, 00:16:06.008 "write_zeroes": true, 00:16:06.008 "zcopy": true, 00:16:06.008 "get_zone_info": false, 00:16:06.008 "zone_management": false, 00:16:06.008 "zone_append": false, 00:16:06.008 "compare": false, 00:16:06.008 "compare_and_write": false, 00:16:06.008 "abort": true, 00:16:06.008 "seek_hole": false, 00:16:06.008 "seek_data": false, 00:16:06.008 "copy": true, 00:16:06.008 "nvme_iov_md": false 00:16:06.008 }, 00:16:06.008 "memory_domains": [ 00:16:06.008 { 00:16:06.008 "dma_device_id": "system", 00:16:06.008 "dma_device_type": 1 00:16:06.008 }, 00:16:06.008 { 00:16:06.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.008 "dma_device_type": 2 00:16:06.008 } 00:16:06.008 ], 00:16:06.008 "driver_specific": { 00:16:06.008 "passthru": { 00:16:06.008 "name": "pt1", 00:16:06.008 "base_bdev_name": "malloc1" 00:16:06.008 } 00:16:06.008 } 00:16:06.008 }' 00:16:06.008 13:58:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.008 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.266 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:06.524 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:06.524 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:06.524 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:06.524 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:06.782 "name": "pt2", 00:16:06.782 "aliases": [ 00:16:06.782 "00000000-0000-0000-0000-000000000002" 00:16:06.782 ], 00:16:06.782 "product_name": "passthru", 00:16:06.782 "block_size": 512, 00:16:06.782 "num_blocks": 65536, 00:16:06.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.782 "assigned_rate_limits": { 00:16:06.782 "rw_ios_per_sec": 0, 00:16:06.782 "rw_mbytes_per_sec": 0, 00:16:06.782 "r_mbytes_per_sec": 0, 00:16:06.782 "w_mbytes_per_sec": 0 00:16:06.782 }, 00:16:06.782 "claimed": true, 00:16:06.782 "claim_type": "exclusive_write", 00:16:06.782 "zoned": false, 00:16:06.782 "supported_io_types": { 00:16:06.782 "read": true, 00:16:06.782 "write": true, 00:16:06.782 "unmap": true, 00:16:06.782 "flush": true, 00:16:06.782 "reset": true, 00:16:06.782 "nvme_admin": false, 00:16:06.782 "nvme_io": false, 00:16:06.782 "nvme_io_md": false, 00:16:06.782 "write_zeroes": true, 00:16:06.782 "zcopy": true, 00:16:06.782 "get_zone_info": false, 00:16:06.782 "zone_management": false, 00:16:06.782 "zone_append": false, 00:16:06.782 "compare": false, 00:16:06.782 "compare_and_write": false, 00:16:06.782 "abort": true, 00:16:06.782 "seek_hole": false, 00:16:06.782 "seek_data": false, 00:16:06.782 "copy": true, 00:16:06.782 "nvme_iov_md": false 00:16:06.782 }, 00:16:06.782 "memory_domains": [ 00:16:06.782 { 00:16:06.782 "dma_device_id": "system", 00:16:06.782 "dma_device_type": 1 00:16:06.782 }, 00:16:06.782 { 00:16:06.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.782 "dma_device_type": 2 00:16:06.782 } 00:16:06.782 ], 00:16:06.782 "driver_specific": { 00:16:06.782 "passthru": { 00:16:06.782 "name": "pt2", 00:16:06.782 "base_bdev_name": "malloc2" 00:16:06.782 } 00:16:06.782 } 00:16:06.782 }' 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:06.782 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.040 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.040 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.040 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.040 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.040 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.040 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:07.040 13:58:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:07.298 [2024-07-25 13:58:56.198338] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 9e12d705-3dc1-4476-a154-f821786c195c '!=' 9e12d705-3dc1-4476-a154-f821786c195c ']' 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 120855 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 120855 ']' 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 120855 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120855 00:16:07.298 killing process with pid 120855 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120855' 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 120855 00:16:07.298 13:58:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 120855 00:16:07.298 [2024-07-25 13:58:56.245551] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.298 [2024-07-25 13:58:56.245648] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.298 [2024-07-25 13:58:56.245740] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.298 [2024-07-25 13:58:56.245829] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:16:07.557 [2024-07-25 13:58:56.411345] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.491 13:58:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@580 -- # return 0 00:16:08.491 00:16:08.491 real 0m12.681s 00:16:08.491 user 0m22.513s 00:16:08.491 sys 0m1.498s 00:16:08.491 13:58:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.491 13:58:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.491 ************************************ 00:16:08.491 END TEST raid_superblock_test 00:16:08.491 ************************************ 00:16:08.747 13:58:57 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:16:08.747 13:58:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:08.747 13:58:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.747 13:58:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.747 ************************************ 00:16:08.747 START TEST raid_read_error_test 00:16:08.747 ************************************ 00:16:08.747 13:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:16:08.747 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid0 00:16:08.747 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=2 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid0 '!=' raid1 ']' 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.rRv1PYVy2Y 00:16:08.748 13:58:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=121236 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 121236 /var/tmp/spdk-raid.sock 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 121236 ']' 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.748 13:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.748 [2024-07-25 13:58:57.642472] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:16:08.748 [2024-07-25 13:58:57.642818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121236 ] 00:16:09.005 [2024-07-25 13:58:57.798936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.005 [2024-07-25 13:58:58.040595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.263 [2024-07-25 13:58:58.230340] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.829 13:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.829 13:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:09.829 13:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:16:09.829 13:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.087 BaseBdev1_malloc 00:16:10.087 13:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:10.345 true 00:16:10.345 13:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:10.604 [2024-07-25 13:58:59.509895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:10.604 [2024-07-25 13:58:59.510295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.604 [2024-07-25 13:58:59.510514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:10.604 [2024-07-25 13:58:59.510672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.604 [2024-07-25 13:58:59.513488] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.604 [2024-07-25 13:58:59.513688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.604 BaseBdev1 00:16:10.604 13:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:16:10.604 13:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.861 BaseBdev2_malloc 00:16:10.861 13:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:11.426 true 00:16:11.426 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:11.426 [2024-07-25 13:59:00.442039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:11.426 [2024-07-25 13:59:00.442424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.426 [2024-07-25 13:59:00.442607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:11.426 [2024-07-25 13:59:00.442771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.426 [2024-07-25 13:59:00.445509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.426 [2024-07-25 13:59:00.445695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:11.426 BaseBdev2 00:16:11.426 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:11.991 [2024-07-25 13:59:00.762255] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.991 [2024-07-25 13:59:00.764987] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.991 [2024-07-25 13:59:00.765408] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:11.991 [2024-07-25 13:59:00.765560] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:11.991 [2024-07-25 13:59:00.765856] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:11.991 [2024-07-25 13:59:00.766415] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:11.991 [2024-07-25 13:59:00.766547] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:16:11.991 [2024-07-25 13:59:00.766924] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.991 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:11.991 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:11.991 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.991 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:11.991 13:59:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:11.991 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:11.992 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.992 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.992 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.992 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.992 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.992 13:59:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.249 13:59:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.249 "name": "raid_bdev1", 00:16:12.249 "uuid": "71cc9909-1a9c-4f6f-976e-a1deafd5895b", 00:16:12.249 "strip_size_kb": 64, 00:16:12.249 "state": "online", 00:16:12.249 "raid_level": "raid0", 00:16:12.249 "superblock": true, 00:16:12.249 "num_base_bdevs": 2, 00:16:12.249 "num_base_bdevs_discovered": 2, 00:16:12.249 "num_base_bdevs_operational": 2, 00:16:12.249 "base_bdevs_list": [ 00:16:12.249 { 00:16:12.249 "name": "BaseBdev1", 00:16:12.249 "uuid": "2956b84f-0b1f-513e-b3d6-c426d77dcb9d", 00:16:12.249 "is_configured": true, 00:16:12.249 "data_offset": 2048, 00:16:12.249 "data_size": 63488 00:16:12.249 }, 00:16:12.249 { 00:16:12.249 "name": "BaseBdev2", 00:16:12.249 "uuid": "607cbf19-633c-5f14-b9d0-c6a21089c5fc", 00:16:12.249 "is_configured": true, 00:16:12.249 "data_offset": 2048, 00:16:12.249 "data_size": 63488 00:16:12.249 } 00:16:12.249 ] 00:16:12.249 }' 00:16:12.249 13:59:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.249 13:59:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.866 13:59:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:16:12.866 13:59:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:12.866 [2024-07-25 13:59:01.868598] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:13.798 13:59:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=2 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.056 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.314 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.314 "name": "raid_bdev1", 00:16:14.314 "uuid": "71cc9909-1a9c-4f6f-976e-a1deafd5895b", 00:16:14.314 "strip_size_kb": 64, 00:16:14.314 "state": "online", 00:16:14.314 "raid_level": "raid0", 00:16:14.314 "superblock": true, 00:16:14.314 "num_base_bdevs": 2, 00:16:14.314 "num_base_bdevs_discovered": 2, 00:16:14.314 "num_base_bdevs_operational": 2, 00:16:14.314 "base_bdevs_list": [ 00:16:14.314 { 00:16:14.314 "name": "BaseBdev1", 00:16:14.314 "uuid": "2956b84f-0b1f-513e-b3d6-c426d77dcb9d", 00:16:14.314 "is_configured": true, 00:16:14.314 "data_offset": 2048, 00:16:14.314 "data_size": 63488 00:16:14.314 }, 00:16:14.314 { 00:16:14.314 "name": "BaseBdev2", 00:16:14.314 "uuid": "607cbf19-633c-5f14-b9d0-c6a21089c5fc", 00:16:14.314 "is_configured": true, 00:16:14.314 "data_offset": 2048, 00:16:14.314 "data_size": 63488 00:16:14.314 } 00:16:14.314 ] 00:16:14.314 }' 00:16:14.314 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.314 13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.246 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:15.246 [2024-07-25 13:59:04.281012] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.246 [2024-07-25 13:59:04.281330] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.246 [2024-07-25 13:59:04.284601] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.246 [2024-07-25 13:59:04.284798] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.246 [2024-07-25 13:59:04.284987] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.246 [2024-07-25 13:59:04.285104] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:16:15.246 0 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 121236 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 121236 ']' 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 121236 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:15.503 
13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121236 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121236' 00:16:15.503 killing process with pid 121236 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 121236 00:16:15.503 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 121236 00:16:15.503 [2024-07-25 13:59:04.325331] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.503 [2024-07-25 13:59:04.439060] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.rRv1PYVy2Y 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:16:16.941 ************************************ 00:16:16.941 END TEST raid_read_error_test 00:16:16.941 ************************************ 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.41 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid0 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.41 != \0\.\0\0 ]] 00:16:16.941 00:16:16.941 real 0m8.135s 00:16:16.941 user 0m12.434s 00:16:16.941 sys 0m0.926s 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.941 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.941 13:59:05 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:16:16.941 13:59:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:16.941 13:59:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.941 13:59:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.941 ************************************ 00:16:16.941 START TEST raid_write_error_test 00:16:16.941 ************************************ 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid0 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=2 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ 
)) 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid0 '!=' raid1 ']' 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.lkXk5cxKi0 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=121439 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 121439 /var/tmp/spdk-raid.sock 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 121439 ']' 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:16.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.941 13:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.941 [2024-07-25 13:59:05.838026] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
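Note: raid_write_error_test follows the same pattern as the read variant above: each base bdev is a malloc bdev wrapped first in an error-injection bdev (bdev_error_create registers it under an EE_ prefix) and then in a passthru bdev, so failures can be injected beneath the RAID layer while bdevperf generates load. A condensed sketch of that flow, with names and flags taken from the surrounding records (illustrative, not an exact replay of the harness), is carried out by the records that follow:
    # error-injectable base bdev: malloc -> error (EE_BaseBdev1_malloc) -> passthru (BaseBdev1)
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # (BaseBdev2 is built the same way, then the raid0 array is assembled on top)
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # arm a write failure under BaseBdev1 and let the already-running bdevperf (-z) execute the job
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
Because raid0 has no redundancy (has_redundancy returns 1), the test only checks that bdevperf reports a non-zero failure rate for raid_bdev1 (0.41 fails/s in this run) rather than expecting the array to survive the injected error.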
00:16:16.941 [2024-07-25 13:59:05.839233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121439 ] 00:16:17.199 [2024-07-25 13:59:06.016106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.457 [2024-07-25 13:59:06.270184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.457 [2024-07-25 13:59:06.482633] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.025 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.025 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:18.025 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:16:18.025 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:18.284 BaseBdev1_malloc 00:16:18.284 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:18.542 true 00:16:18.542 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:18.799 [2024-07-25 13:59:07.759529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:18.799 [2024-07-25 13:59:07.759997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.800 [2024-07-25 13:59:07.760193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:18.800 [2024-07-25 13:59:07.760358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.800 [2024-07-25 13:59:07.763174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.800 [2024-07-25 13:59:07.763357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:18.800 BaseBdev1 00:16:18.800 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:16:18.800 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:19.366 BaseBdev2_malloc 00:16:19.366 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:19.366 true 00:16:19.366 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:19.932 [2024-07-25 13:59:08.684707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:19.932 [2024-07-25 13:59:08.685077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.932 [2024-07-25 13:59:08.685251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:19.932 [2024-07-25 
13:59:08.685380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.932 [2024-07-25 13:59:08.688084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.932 [2024-07-25 13:59:08.688259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:19.932 BaseBdev2 00:16:19.932 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:19.932 [2024-07-25 13:59:08.964838] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.932 [2024-07-25 13:59:08.967337] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.932 [2024-07-25 13:59:08.967712] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:19.932 [2024-07-25 13:59:08.967847] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:19.932 [2024-07-25 13:59:08.968034] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:19.932 [2024-07-25 13:59:08.968499] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:19.932 [2024-07-25 13:59:08.968628] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:16:19.932 [2024-07-25 13:59:08.968996] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.211 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.211 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.211 "name": "raid_bdev1", 00:16:20.211 "uuid": "730468a0-feac-491e-89f1-32d8e6408cdc", 00:16:20.211 "strip_size_kb": 64, 00:16:20.211 "state": "online", 00:16:20.211 "raid_level": "raid0", 00:16:20.211 "superblock": true, 00:16:20.211 "num_base_bdevs": 2, 00:16:20.211 "num_base_bdevs_discovered": 2, 00:16:20.211 "num_base_bdevs_operational": 2, 00:16:20.211 "base_bdevs_list": [ 00:16:20.211 { 00:16:20.211 
"name": "BaseBdev1", 00:16:20.211 "uuid": "638e1458-cc2e-5f38-bbdc-ecf6502156f6", 00:16:20.211 "is_configured": true, 00:16:20.211 "data_offset": 2048, 00:16:20.211 "data_size": 63488 00:16:20.211 }, 00:16:20.211 { 00:16:20.211 "name": "BaseBdev2", 00:16:20.211 "uuid": "ca5c5ebc-3a75-5dd7-b8d6-433cfbb393dd", 00:16:20.211 "is_configured": true, 00:16:20.211 "data_offset": 2048, 00:16:20.211 "data_size": 63488 00:16:20.211 } 00:16:20.211 ] 00:16:20.211 }' 00:16:20.211 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.211 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.147 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:16:21.147 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:21.147 [2024-07-25 13:59:10.018525] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:22.080 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=2 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.338 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.595 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.595 "name": "raid_bdev1", 00:16:22.595 "uuid": "730468a0-feac-491e-89f1-32d8e6408cdc", 00:16:22.595 "strip_size_kb": 64, 00:16:22.595 "state": "online", 00:16:22.595 "raid_level": "raid0", 00:16:22.595 "superblock": true, 00:16:22.595 "num_base_bdevs": 2, 00:16:22.596 "num_base_bdevs_discovered": 2, 00:16:22.596 "num_base_bdevs_operational": 2, 00:16:22.596 "base_bdevs_list": [ 00:16:22.596 { 00:16:22.596 
"name": "BaseBdev1", 00:16:22.596 "uuid": "638e1458-cc2e-5f38-bbdc-ecf6502156f6", 00:16:22.596 "is_configured": true, 00:16:22.596 "data_offset": 2048, 00:16:22.596 "data_size": 63488 00:16:22.596 }, 00:16:22.596 { 00:16:22.596 "name": "BaseBdev2", 00:16:22.596 "uuid": "ca5c5ebc-3a75-5dd7-b8d6-433cfbb393dd", 00:16:22.596 "is_configured": true, 00:16:22.596 "data_offset": 2048, 00:16:22.596 "data_size": 63488 00:16:22.596 } 00:16:22.596 ] 00:16:22.596 }' 00:16:22.596 13:59:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.596 13:59:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.161 13:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:23.726 [2024-07-25 13:59:12.484508] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.726 [2024-07-25 13:59:12.484883] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.726 [2024-07-25 13:59:12.488121] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.726 [2024-07-25 13:59:12.488297] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.726 [2024-07-25 13:59:12.488378] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.726 [2024-07-25 13:59:12.488568] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:16:23.726 0 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 121439 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 121439 ']' 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 121439 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121439 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121439' 00:16:23.726 killing process with pid 121439 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 121439 00:16:23.726 13:59:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 121439 00:16:23.726 [2024-07-25 13:59:12.541385] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.726 [2024-07-25 13:59:12.654759] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.lkXk5cxKi0 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:16:25.109 ************************************ 00:16:25.109 END TEST raid_write_error_test 00:16:25.109 
************************************ 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.41 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid0 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.41 != \0\.\0\0 ]] 00:16:25.109 00:16:25.109 real 0m8.103s 00:16:25.109 user 0m12.511s 00:16:25.109 sys 0m0.894s 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.109 13:59:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.109 13:59:13 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:16:25.109 13:59:13 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:16:25.109 13:59:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:25.109 13:59:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.109 13:59:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.109 ************************************ 00:16:25.109 START TEST raid_state_function_test 00:16:25.109 ************************************ 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local 
superblock_create_arg 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=121643 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:25.109 Process raid pid: 121643 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121643' 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 121643 /var/tmp/spdk-raid.sock 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 121643 ']' 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:25.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.109 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.109 [2024-07-25 13:59:13.985620] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
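The RPC exchanges that follow all go through the socket the bdev_svc app registers above. As a rough guide to what the xtrace output below is doing, this is the core sequence the test issues, reconstructed only from commands visible in this log (the repository path and the /var/tmp/spdk-raid.sock socket are taken from this run; treat it as an illustrative sketch, not the test script itself):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Two malloc base bdevs (32 MiB, 512-byte blocks), matching the 65536-block devices dumped below.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2

  # Assemble a concat array with a 64 KiB strip size; this variant runs without a superblock.
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # The array reports "configuring" until every base bdev is claimed, then "online";
  # removing a base bdev of a non-redundant level later drives it to "offline".
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # Tear down between sub-cases.
  $RPC bdev_raid_delete Existed_Raid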
00:16:25.109 [2024-07-25 13:59:13.986079] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.367 [2024-07-25 13:59:14.158398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.367 [2024-07-25 13:59:14.405144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.625 [2024-07-25 13:59:14.654273] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.191 13:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.191 13:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:26.191 13:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:26.191 [2024-07-25 13:59:15.208727] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.191 [2024-07-25 13:59:15.209121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.191 [2024-07-25 13:59:15.209252] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.191 [2024-07-25 13:59:15.209326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.191 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.757 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.757 "name": "Existed_Raid", 00:16:26.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.757 "strip_size_kb": 64, 00:16:26.757 "state": "configuring", 00:16:26.757 "raid_level": "concat", 00:16:26.757 "superblock": false, 00:16:26.757 "num_base_bdevs": 2, 00:16:26.757 "num_base_bdevs_discovered": 0, 00:16:26.757 "num_base_bdevs_operational": 2, 00:16:26.757 
"base_bdevs_list": [ 00:16:26.757 { 00:16:26.757 "name": "BaseBdev1", 00:16:26.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.757 "is_configured": false, 00:16:26.757 "data_offset": 0, 00:16:26.757 "data_size": 0 00:16:26.757 }, 00:16:26.757 { 00:16:26.757 "name": "BaseBdev2", 00:16:26.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.757 "is_configured": false, 00:16:26.757 "data_offset": 0, 00:16:26.757 "data_size": 0 00:16:26.757 } 00:16:26.757 ] 00:16:26.757 }' 00:16:26.757 13:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.757 13:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.347 13:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:27.606 [2024-07-25 13:59:16.444869] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.606 [2024-07-25 13:59:16.445135] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:16:27.606 13:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:27.864 [2024-07-25 13:59:16.732960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.864 [2024-07-25 13:59:16.733264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:27.864 [2024-07-25 13:59:16.733399] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.864 [2024-07-25 13:59:16.733468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.864 13:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:28.122 [2024-07-25 13:59:17.044731] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.122 BaseBdev1 00:16:28.122 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:28.122 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:28.122 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:28.122 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:28.122 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:28.122 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:28.122 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.380 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.637 [ 00:16:28.637 { 00:16:28.637 "name": "BaseBdev1", 00:16:28.637 "aliases": [ 00:16:28.637 "31bed2bc-bbf2-44f5-b007-1638cddfa18b" 00:16:28.637 ], 00:16:28.637 "product_name": "Malloc disk", 00:16:28.637 "block_size": 512, 
00:16:28.637 "num_blocks": 65536, 00:16:28.637 "uuid": "31bed2bc-bbf2-44f5-b007-1638cddfa18b", 00:16:28.637 "assigned_rate_limits": { 00:16:28.637 "rw_ios_per_sec": 0, 00:16:28.637 "rw_mbytes_per_sec": 0, 00:16:28.637 "r_mbytes_per_sec": 0, 00:16:28.637 "w_mbytes_per_sec": 0 00:16:28.637 }, 00:16:28.637 "claimed": true, 00:16:28.637 "claim_type": "exclusive_write", 00:16:28.637 "zoned": false, 00:16:28.637 "supported_io_types": { 00:16:28.637 "read": true, 00:16:28.637 "write": true, 00:16:28.637 "unmap": true, 00:16:28.637 "flush": true, 00:16:28.637 "reset": true, 00:16:28.637 "nvme_admin": false, 00:16:28.637 "nvme_io": false, 00:16:28.637 "nvme_io_md": false, 00:16:28.637 "write_zeroes": true, 00:16:28.637 "zcopy": true, 00:16:28.637 "get_zone_info": false, 00:16:28.637 "zone_management": false, 00:16:28.637 "zone_append": false, 00:16:28.637 "compare": false, 00:16:28.637 "compare_and_write": false, 00:16:28.637 "abort": true, 00:16:28.637 "seek_hole": false, 00:16:28.637 "seek_data": false, 00:16:28.637 "copy": true, 00:16:28.637 "nvme_iov_md": false 00:16:28.637 }, 00:16:28.637 "memory_domains": [ 00:16:28.637 { 00:16:28.637 "dma_device_id": "system", 00:16:28.637 "dma_device_type": 1 00:16:28.637 }, 00:16:28.637 { 00:16:28.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.637 "dma_device_type": 2 00:16:28.637 } 00:16:28.637 ], 00:16:28.637 "driver_specific": {} 00:16:28.637 } 00:16:28.637 ] 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.637 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.894 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.894 "name": "Existed_Raid", 00:16:28.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.894 "strip_size_kb": 64, 00:16:28.894 "state": "configuring", 00:16:28.894 "raid_level": "concat", 00:16:28.894 "superblock": false, 00:16:28.894 "num_base_bdevs": 2, 00:16:28.894 "num_base_bdevs_discovered": 1, 00:16:28.894 "num_base_bdevs_operational": 2, 00:16:28.894 "base_bdevs_list": [ 00:16:28.894 { 00:16:28.894 "name": 
"BaseBdev1", 00:16:28.894 "uuid": "31bed2bc-bbf2-44f5-b007-1638cddfa18b", 00:16:28.894 "is_configured": true, 00:16:28.894 "data_offset": 0, 00:16:28.894 "data_size": 65536 00:16:28.894 }, 00:16:28.894 { 00:16:28.894 "name": "BaseBdev2", 00:16:28.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.894 "is_configured": false, 00:16:28.894 "data_offset": 0, 00:16:28.894 "data_size": 0 00:16:28.894 } 00:16:28.894 ] 00:16:28.894 }' 00:16:28.894 13:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.894 13:59:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.461 13:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.719 [2024-07-25 13:59:18.685185] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.719 [2024-07-25 13:59:18.685453] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:16:29.719 13:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:29.978 [2024-07-25 13:59:19.013298] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.978 [2024-07-25 13:59:19.015680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.978 [2024-07-25 13:59:19.015879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.236 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.495 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.495 "name": "Existed_Raid", 
00:16:30.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.495 "strip_size_kb": 64, 00:16:30.495 "state": "configuring", 00:16:30.495 "raid_level": "concat", 00:16:30.495 "superblock": false, 00:16:30.495 "num_base_bdevs": 2, 00:16:30.495 "num_base_bdevs_discovered": 1, 00:16:30.495 "num_base_bdevs_operational": 2, 00:16:30.495 "base_bdevs_list": [ 00:16:30.495 { 00:16:30.495 "name": "BaseBdev1", 00:16:30.495 "uuid": "31bed2bc-bbf2-44f5-b007-1638cddfa18b", 00:16:30.495 "is_configured": true, 00:16:30.495 "data_offset": 0, 00:16:30.495 "data_size": 65536 00:16:30.495 }, 00:16:30.495 { 00:16:30.495 "name": "BaseBdev2", 00:16:30.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.495 "is_configured": false, 00:16:30.495 "data_offset": 0, 00:16:30.495 "data_size": 0 00:16:30.495 } 00:16:30.495 ] 00:16:30.495 }' 00:16:30.495 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.495 13:59:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.061 13:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.320 [2024-07-25 13:59:20.298428] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.320 [2024-07-25 13:59:20.298802] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:31.320 [2024-07-25 13:59:20.298852] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:31.320 [2024-07-25 13:59:20.299100] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:31.320 [2024-07-25 13:59:20.299592] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:31.320 [2024-07-25 13:59:20.299745] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:16:31.320 [2024-07-25 13:59:20.300150] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.320 BaseBdev2 00:16:31.320 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:31.320 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:31.320 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.320 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:31.320 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.320 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.320 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:31.579 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.837 [ 00:16:31.837 { 00:16:31.837 "name": "BaseBdev2", 00:16:31.837 "aliases": [ 00:16:31.837 "9fdec12e-9705-4b21-8d60-d650452ce331" 00:16:31.837 ], 00:16:31.837 "product_name": "Malloc disk", 00:16:31.837 "block_size": 512, 00:16:31.837 "num_blocks": 65536, 00:16:31.837 "uuid": "9fdec12e-9705-4b21-8d60-d650452ce331", 
00:16:31.837 "assigned_rate_limits": { 00:16:31.837 "rw_ios_per_sec": 0, 00:16:31.837 "rw_mbytes_per_sec": 0, 00:16:31.837 "r_mbytes_per_sec": 0, 00:16:31.837 "w_mbytes_per_sec": 0 00:16:31.837 }, 00:16:31.837 "claimed": true, 00:16:31.837 "claim_type": "exclusive_write", 00:16:31.837 "zoned": false, 00:16:31.837 "supported_io_types": { 00:16:31.837 "read": true, 00:16:31.837 "write": true, 00:16:31.837 "unmap": true, 00:16:31.837 "flush": true, 00:16:31.837 "reset": true, 00:16:31.837 "nvme_admin": false, 00:16:31.837 "nvme_io": false, 00:16:31.837 "nvme_io_md": false, 00:16:31.837 "write_zeroes": true, 00:16:31.837 "zcopy": true, 00:16:31.837 "get_zone_info": false, 00:16:31.837 "zone_management": false, 00:16:31.837 "zone_append": false, 00:16:31.837 "compare": false, 00:16:31.837 "compare_and_write": false, 00:16:31.837 "abort": true, 00:16:31.837 "seek_hole": false, 00:16:31.837 "seek_data": false, 00:16:31.837 "copy": true, 00:16:31.837 "nvme_iov_md": false 00:16:31.837 }, 00:16:31.837 "memory_domains": [ 00:16:31.837 { 00:16:31.837 "dma_device_id": "system", 00:16:31.837 "dma_device_type": 1 00:16:31.837 }, 00:16:31.837 { 00:16:31.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.837 "dma_device_type": 2 00:16:31.837 } 00:16:31.837 ], 00:16:31.837 "driver_specific": {} 00:16:31.837 } 00:16:31.837 ] 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.837 13:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.095 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.095 "name": "Existed_Raid", 00:16:32.095 "uuid": "02621db8-6bd1-40af-88da-2d2450666c35", 00:16:32.095 "strip_size_kb": 64, 00:16:32.095 "state": "online", 00:16:32.095 "raid_level": "concat", 00:16:32.095 "superblock": false, 00:16:32.095 "num_base_bdevs": 2, 00:16:32.095 "num_base_bdevs_discovered": 2, 00:16:32.095 
"num_base_bdevs_operational": 2, 00:16:32.095 "base_bdevs_list": [ 00:16:32.095 { 00:16:32.095 "name": "BaseBdev1", 00:16:32.095 "uuid": "31bed2bc-bbf2-44f5-b007-1638cddfa18b", 00:16:32.095 "is_configured": true, 00:16:32.095 "data_offset": 0, 00:16:32.095 "data_size": 65536 00:16:32.095 }, 00:16:32.095 { 00:16:32.095 "name": "BaseBdev2", 00:16:32.095 "uuid": "9fdec12e-9705-4b21-8d60-d650452ce331", 00:16:32.095 "is_configured": true, 00:16:32.095 "data_offset": 0, 00:16:32.095 "data_size": 65536 00:16:32.095 } 00:16:32.095 ] 00:16:32.095 }' 00:16:32.095 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.095 13:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:33.030 13:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:33.030 [2024-07-25 13:59:22.066681] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.289 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:33.289 "name": "Existed_Raid", 00:16:33.289 "aliases": [ 00:16:33.289 "02621db8-6bd1-40af-88da-2d2450666c35" 00:16:33.289 ], 00:16:33.289 "product_name": "Raid Volume", 00:16:33.289 "block_size": 512, 00:16:33.289 "num_blocks": 131072, 00:16:33.289 "uuid": "02621db8-6bd1-40af-88da-2d2450666c35", 00:16:33.289 "assigned_rate_limits": { 00:16:33.289 "rw_ios_per_sec": 0, 00:16:33.289 "rw_mbytes_per_sec": 0, 00:16:33.289 "r_mbytes_per_sec": 0, 00:16:33.289 "w_mbytes_per_sec": 0 00:16:33.289 }, 00:16:33.289 "claimed": false, 00:16:33.289 "zoned": false, 00:16:33.289 "supported_io_types": { 00:16:33.289 "read": true, 00:16:33.289 "write": true, 00:16:33.289 "unmap": true, 00:16:33.289 "flush": true, 00:16:33.289 "reset": true, 00:16:33.289 "nvme_admin": false, 00:16:33.289 "nvme_io": false, 00:16:33.289 "nvme_io_md": false, 00:16:33.289 "write_zeroes": true, 00:16:33.289 "zcopy": false, 00:16:33.289 "get_zone_info": false, 00:16:33.289 "zone_management": false, 00:16:33.289 "zone_append": false, 00:16:33.289 "compare": false, 00:16:33.289 "compare_and_write": false, 00:16:33.289 "abort": false, 00:16:33.289 "seek_hole": false, 00:16:33.289 "seek_data": false, 00:16:33.289 "copy": false, 00:16:33.289 "nvme_iov_md": false 00:16:33.289 }, 00:16:33.289 "memory_domains": [ 00:16:33.289 { 00:16:33.289 "dma_device_id": "system", 00:16:33.289 "dma_device_type": 1 00:16:33.289 }, 00:16:33.289 { 00:16:33.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.289 "dma_device_type": 2 00:16:33.289 }, 00:16:33.289 { 00:16:33.289 "dma_device_id": "system", 00:16:33.289 "dma_device_type": 1 00:16:33.289 }, 
00:16:33.289 { 00:16:33.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.289 "dma_device_type": 2 00:16:33.289 } 00:16:33.289 ], 00:16:33.289 "driver_specific": { 00:16:33.289 "raid": { 00:16:33.289 "uuid": "02621db8-6bd1-40af-88da-2d2450666c35", 00:16:33.289 "strip_size_kb": 64, 00:16:33.289 "state": "online", 00:16:33.289 "raid_level": "concat", 00:16:33.289 "superblock": false, 00:16:33.289 "num_base_bdevs": 2, 00:16:33.289 "num_base_bdevs_discovered": 2, 00:16:33.289 "num_base_bdevs_operational": 2, 00:16:33.289 "base_bdevs_list": [ 00:16:33.289 { 00:16:33.289 "name": "BaseBdev1", 00:16:33.289 "uuid": "31bed2bc-bbf2-44f5-b007-1638cddfa18b", 00:16:33.289 "is_configured": true, 00:16:33.289 "data_offset": 0, 00:16:33.289 "data_size": 65536 00:16:33.289 }, 00:16:33.289 { 00:16:33.289 "name": "BaseBdev2", 00:16:33.289 "uuid": "9fdec12e-9705-4b21-8d60-d650452ce331", 00:16:33.289 "is_configured": true, 00:16:33.289 "data_offset": 0, 00:16:33.289 "data_size": 65536 00:16:33.289 } 00:16:33.289 ] 00:16:33.289 } 00:16:33.289 } 00:16:33.289 }' 00:16:33.289 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.289 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:33.289 BaseBdev2' 00:16:33.289 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:33.289 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:33.289 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:33.547 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:33.547 "name": "BaseBdev1", 00:16:33.547 "aliases": [ 00:16:33.547 "31bed2bc-bbf2-44f5-b007-1638cddfa18b" 00:16:33.547 ], 00:16:33.547 "product_name": "Malloc disk", 00:16:33.547 "block_size": 512, 00:16:33.547 "num_blocks": 65536, 00:16:33.547 "uuid": "31bed2bc-bbf2-44f5-b007-1638cddfa18b", 00:16:33.547 "assigned_rate_limits": { 00:16:33.547 "rw_ios_per_sec": 0, 00:16:33.547 "rw_mbytes_per_sec": 0, 00:16:33.547 "r_mbytes_per_sec": 0, 00:16:33.548 "w_mbytes_per_sec": 0 00:16:33.548 }, 00:16:33.548 "claimed": true, 00:16:33.548 "claim_type": "exclusive_write", 00:16:33.548 "zoned": false, 00:16:33.548 "supported_io_types": { 00:16:33.548 "read": true, 00:16:33.548 "write": true, 00:16:33.548 "unmap": true, 00:16:33.548 "flush": true, 00:16:33.548 "reset": true, 00:16:33.548 "nvme_admin": false, 00:16:33.548 "nvme_io": false, 00:16:33.548 "nvme_io_md": false, 00:16:33.548 "write_zeroes": true, 00:16:33.548 "zcopy": true, 00:16:33.548 "get_zone_info": false, 00:16:33.548 "zone_management": false, 00:16:33.548 "zone_append": false, 00:16:33.548 "compare": false, 00:16:33.548 "compare_and_write": false, 00:16:33.548 "abort": true, 00:16:33.548 "seek_hole": false, 00:16:33.548 "seek_data": false, 00:16:33.548 "copy": true, 00:16:33.548 "nvme_iov_md": false 00:16:33.548 }, 00:16:33.548 "memory_domains": [ 00:16:33.548 { 00:16:33.548 "dma_device_id": "system", 00:16:33.548 "dma_device_type": 1 00:16:33.548 }, 00:16:33.548 { 00:16:33.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.548 "dma_device_type": 2 00:16:33.548 } 00:16:33.548 ], 00:16:33.548 "driver_specific": {} 00:16:33.548 }' 00:16:33.548 13:59:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.548 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.548 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:33.548 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.548 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:33.806 13:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:34.065 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:34.065 "name": "BaseBdev2", 00:16:34.065 "aliases": [ 00:16:34.065 "9fdec12e-9705-4b21-8d60-d650452ce331" 00:16:34.065 ], 00:16:34.065 "product_name": "Malloc disk", 00:16:34.065 "block_size": 512, 00:16:34.065 "num_blocks": 65536, 00:16:34.065 "uuid": "9fdec12e-9705-4b21-8d60-d650452ce331", 00:16:34.065 "assigned_rate_limits": { 00:16:34.065 "rw_ios_per_sec": 0, 00:16:34.065 "rw_mbytes_per_sec": 0, 00:16:34.065 "r_mbytes_per_sec": 0, 00:16:34.065 "w_mbytes_per_sec": 0 00:16:34.065 }, 00:16:34.065 "claimed": true, 00:16:34.065 "claim_type": "exclusive_write", 00:16:34.065 "zoned": false, 00:16:34.065 "supported_io_types": { 00:16:34.065 "read": true, 00:16:34.065 "write": true, 00:16:34.065 "unmap": true, 00:16:34.065 "flush": true, 00:16:34.065 "reset": true, 00:16:34.065 "nvme_admin": false, 00:16:34.065 "nvme_io": false, 00:16:34.065 "nvme_io_md": false, 00:16:34.065 "write_zeroes": true, 00:16:34.065 "zcopy": true, 00:16:34.065 "get_zone_info": false, 00:16:34.065 "zone_management": false, 00:16:34.065 "zone_append": false, 00:16:34.065 "compare": false, 00:16:34.065 "compare_and_write": false, 00:16:34.065 "abort": true, 00:16:34.065 "seek_hole": false, 00:16:34.065 "seek_data": false, 00:16:34.065 "copy": true, 00:16:34.065 "nvme_iov_md": false 00:16:34.065 }, 00:16:34.065 "memory_domains": [ 00:16:34.065 { 00:16:34.065 "dma_device_id": "system", 00:16:34.065 "dma_device_type": 1 00:16:34.065 }, 00:16:34.065 { 00:16:34.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.065 "dma_device_type": 2 00:16:34.065 } 00:16:34.065 ], 00:16:34.065 "driver_specific": {} 00:16:34.065 }' 00:16:34.065 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.323 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:34.323 13:59:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:34.323 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.323 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:34.323 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:34.323 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.323 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:34.581 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:34.581 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.581 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:34.581 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:34.581 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:34.839 [2024-07-25 13:59:23.750864] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.839 [2024-07-25 13:59:23.751109] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.839 [2024-07-25 13:59:23.751278] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.839 13:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.407 13:59:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.407 "name": "Existed_Raid", 00:16:35.407 "uuid": "02621db8-6bd1-40af-88da-2d2450666c35", 00:16:35.407 "strip_size_kb": 64, 00:16:35.407 "state": "offline", 00:16:35.407 "raid_level": "concat", 00:16:35.407 "superblock": false, 00:16:35.407 "num_base_bdevs": 2, 00:16:35.407 "num_base_bdevs_discovered": 1, 00:16:35.407 "num_base_bdevs_operational": 1, 00:16:35.407 "base_bdevs_list": [ 00:16:35.407 { 00:16:35.407 "name": null, 00:16:35.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.407 "is_configured": false, 00:16:35.407 "data_offset": 0, 00:16:35.407 "data_size": 65536 00:16:35.407 }, 00:16:35.407 { 00:16:35.407 "name": "BaseBdev2", 00:16:35.407 "uuid": "9fdec12e-9705-4b21-8d60-d650452ce331", 00:16:35.407 "is_configured": true, 00:16:35.407 "data_offset": 0, 00:16:35.407 "data_size": 65536 00:16:35.407 } 00:16:35.407 ] 00:16:35.407 }' 00:16:35.407 13:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.407 13:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.001 13:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:36.001 13:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:36.001 13:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.001 13:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:36.260 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:36.260 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:36.260 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:36.260 [2024-07-25 13:59:25.279973] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:36.260 [2024-07-25 13:59:25.280270] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:16:36.518 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:36.518 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:36.518 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.518 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 121643 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 121643 ']' 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 121643 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121643 00:16:36.777 killing process with pid 121643 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121643' 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 121643 00:16:36.777 13:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 121643 00:16:36.777 [2024-07-25 13:59:25.697609] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.777 [2024-07-25 13:59:25.697736] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.153 ************************************ 00:16:38.153 END TEST raid_state_function_test 00:16:38.153 ************************************ 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:38.153 00:16:38.153 real 0m12.925s 00:16:38.153 user 0m22.828s 00:16:38.153 sys 0m1.534s 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.153 13:59:26 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:16:38.153 13:59:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:38.153 13:59:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.153 13:59:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.153 ************************************ 00:16:38.153 START TEST raid_state_function_test_sb 00:16:38.153 ************************************ 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=122037 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 122037' 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:38.153 Process raid pid: 122037 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 122037 /var/tmp/spdk-raid.sock 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 122037 ']' 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:38.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.153 13:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.153 [2024-07-25 13:59:26.970468] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
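The superblock variant below repeats the same flow, but bdev_raid_create is invoked with the extra -s flag so RAID metadata is written to each base bdev. In the JSON dumped later in this test that shows up as "superblock": true and, once BaseBdev1 is configured, data_offset 2048 / data_size 63488 instead of the 0 / 65536 seen in the non-superblock run. A minimal sketch of just that difference, under the same socket and path assumptions as the earlier sketch:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Same concat layout as before, now with an on-disk superblock (-s).
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # The reserved metadata region is visible as the shifted data_offset of each base bdev.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'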
00:16:38.153 [2024-07-25 13:59:26.971023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.153 [2024-07-25 13:59:27.147208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.412 [2024-07-25 13:59:27.412582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.670 [2024-07-25 13:59:27.660925] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.235 13:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.235 13:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:39.235 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:39.494 [2024-07-25 13:59:28.311416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.494 [2024-07-25 13:59:28.311882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.494 [2024-07-25 13:59:28.312055] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.494 [2024-07-25 13:59:28.312148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.494 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.752 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.752 "name": "Existed_Raid", 00:16:39.752 "uuid": "7669eb49-d09a-4061-a6c1-7295df016ee8", 00:16:39.752 "strip_size_kb": 64, 00:16:39.752 "state": "configuring", 00:16:39.752 "raid_level": "concat", 00:16:39.752 "superblock": true, 00:16:39.752 "num_base_bdevs": 2, 00:16:39.752 "num_base_bdevs_discovered": 0, 00:16:39.752 
"num_base_bdevs_operational": 2, 00:16:39.752 "base_bdevs_list": [ 00:16:39.752 { 00:16:39.752 "name": "BaseBdev1", 00:16:39.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.752 "is_configured": false, 00:16:39.752 "data_offset": 0, 00:16:39.752 "data_size": 0 00:16:39.752 }, 00:16:39.752 { 00:16:39.752 "name": "BaseBdev2", 00:16:39.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.752 "is_configured": false, 00:16:39.752 "data_offset": 0, 00:16:39.752 "data_size": 0 00:16:39.752 } 00:16:39.752 ] 00:16:39.752 }' 00:16:39.752 13:59:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.752 13:59:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.318 13:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:40.581 [2024-07-25 13:59:29.447591] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.581 [2024-07-25 13:59:29.447955] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:16:40.581 13:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:40.842 [2024-07-25 13:59:29.755704] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.842 [2024-07-25 13:59:29.756077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.842 [2024-07-25 13:59:29.756225] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.842 [2024-07-25 13:59:29.756298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.842 13:59:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.100 [2024-07-25 13:59:30.033236] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.100 BaseBdev1 00:16:41.100 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:41.100 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:41.100 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:41.100 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:41.100 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:41.100 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:41.100 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.358 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.616 [ 00:16:41.616 { 00:16:41.616 "name": "BaseBdev1", 00:16:41.616 "aliases": [ 00:16:41.616 "16558cfb-90af-4394-975e-f16dcfd9c2a8" 
00:16:41.616 ], 00:16:41.616 "product_name": "Malloc disk", 00:16:41.616 "block_size": 512, 00:16:41.616 "num_blocks": 65536, 00:16:41.616 "uuid": "16558cfb-90af-4394-975e-f16dcfd9c2a8", 00:16:41.616 "assigned_rate_limits": { 00:16:41.616 "rw_ios_per_sec": 0, 00:16:41.616 "rw_mbytes_per_sec": 0, 00:16:41.616 "r_mbytes_per_sec": 0, 00:16:41.616 "w_mbytes_per_sec": 0 00:16:41.616 }, 00:16:41.616 "claimed": true, 00:16:41.616 "claim_type": "exclusive_write", 00:16:41.616 "zoned": false, 00:16:41.616 "supported_io_types": { 00:16:41.616 "read": true, 00:16:41.616 "write": true, 00:16:41.616 "unmap": true, 00:16:41.616 "flush": true, 00:16:41.616 "reset": true, 00:16:41.616 "nvme_admin": false, 00:16:41.616 "nvme_io": false, 00:16:41.616 "nvme_io_md": false, 00:16:41.616 "write_zeroes": true, 00:16:41.616 "zcopy": true, 00:16:41.616 "get_zone_info": false, 00:16:41.616 "zone_management": false, 00:16:41.616 "zone_append": false, 00:16:41.616 "compare": false, 00:16:41.616 "compare_and_write": false, 00:16:41.616 "abort": true, 00:16:41.616 "seek_hole": false, 00:16:41.616 "seek_data": false, 00:16:41.616 "copy": true, 00:16:41.616 "nvme_iov_md": false 00:16:41.616 }, 00:16:41.616 "memory_domains": [ 00:16:41.616 { 00:16:41.616 "dma_device_id": "system", 00:16:41.616 "dma_device_type": 1 00:16:41.616 }, 00:16:41.616 { 00:16:41.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.616 "dma_device_type": 2 00:16:41.616 } 00:16:41.616 ], 00:16:41.616 "driver_specific": {} 00:16:41.616 } 00:16:41.616 ] 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.616 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.874 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.874 "name": "Existed_Raid", 00:16:41.874 "uuid": "257155fb-e3da-42c5-b088-7125d932053c", 00:16:41.874 "strip_size_kb": 64, 00:16:41.874 "state": "configuring", 00:16:41.874 "raid_level": "concat", 00:16:41.874 "superblock": true, 00:16:41.874 "num_base_bdevs": 2, 00:16:41.874 
"num_base_bdevs_discovered": 1, 00:16:41.874 "num_base_bdevs_operational": 2, 00:16:41.874 "base_bdevs_list": [ 00:16:41.874 { 00:16:41.874 "name": "BaseBdev1", 00:16:41.874 "uuid": "16558cfb-90af-4394-975e-f16dcfd9c2a8", 00:16:41.874 "is_configured": true, 00:16:41.874 "data_offset": 2048, 00:16:41.874 "data_size": 63488 00:16:41.874 }, 00:16:41.874 { 00:16:41.874 "name": "BaseBdev2", 00:16:41.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.874 "is_configured": false, 00:16:41.874 "data_offset": 0, 00:16:41.874 "data_size": 0 00:16:41.874 } 00:16:41.874 ] 00:16:41.874 }' 00:16:41.874 13:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.874 13:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.807 13:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.807 [2024-07-25 13:59:31.709735] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.807 [2024-07-25 13:59:31.710124] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:16:42.807 13:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:43.065 [2024-07-25 13:59:31.985869] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.065 [2024-07-25 13:59:31.988336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.065 [2024-07-25 13:59:31.988541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.065 13:59:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.325 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.325 "name": "Existed_Raid", 00:16:43.325 "uuid": "3c694f2d-9322-4922-ae8c-06e365250a11", 00:16:43.325 "strip_size_kb": 64, 00:16:43.325 "state": "configuring", 00:16:43.325 "raid_level": "concat", 00:16:43.325 "superblock": true, 00:16:43.325 "num_base_bdevs": 2, 00:16:43.325 "num_base_bdevs_discovered": 1, 00:16:43.325 "num_base_bdevs_operational": 2, 00:16:43.325 "base_bdevs_list": [ 00:16:43.325 { 00:16:43.325 "name": "BaseBdev1", 00:16:43.325 "uuid": "16558cfb-90af-4394-975e-f16dcfd9c2a8", 00:16:43.325 "is_configured": true, 00:16:43.325 "data_offset": 2048, 00:16:43.325 "data_size": 63488 00:16:43.325 }, 00:16:43.325 { 00:16:43.325 "name": "BaseBdev2", 00:16:43.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.325 "is_configured": false, 00:16:43.325 "data_offset": 0, 00:16:43.325 "data_size": 0 00:16:43.325 } 00:16:43.325 ] 00:16:43.325 }' 00:16:43.325 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.325 13:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.892 13:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:44.150 [2024-07-25 13:59:33.170613] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.150 [2024-07-25 13:59:33.171209] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:44.150 [2024-07-25 13:59:33.171352] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:44.150 [2024-07-25 13:59:33.171532] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:44.150 [2024-07-25 13:59:33.171953] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:44.150 [2024-07-25 13:59:33.172102] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:16:44.150 BaseBdev2 00:16:44.150 [2024-07-25 13:59:33.172376] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.150 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:44.150 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:44.150 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:44.150 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:44.150 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:44.150 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:44.150 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.418 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.683 [ 00:16:44.683 { 00:16:44.683 "name": "BaseBdev2", 00:16:44.683 
"aliases": [ 00:16:44.683 "9ab13176-d0ec-4c8f-8cb6-361c2836e1e7" 00:16:44.683 ], 00:16:44.683 "product_name": "Malloc disk", 00:16:44.683 "block_size": 512, 00:16:44.683 "num_blocks": 65536, 00:16:44.683 "uuid": "9ab13176-d0ec-4c8f-8cb6-361c2836e1e7", 00:16:44.683 "assigned_rate_limits": { 00:16:44.683 "rw_ios_per_sec": 0, 00:16:44.683 "rw_mbytes_per_sec": 0, 00:16:44.683 "r_mbytes_per_sec": 0, 00:16:44.683 "w_mbytes_per_sec": 0 00:16:44.683 }, 00:16:44.683 "claimed": true, 00:16:44.683 "claim_type": "exclusive_write", 00:16:44.683 "zoned": false, 00:16:44.683 "supported_io_types": { 00:16:44.683 "read": true, 00:16:44.683 "write": true, 00:16:44.683 "unmap": true, 00:16:44.683 "flush": true, 00:16:44.683 "reset": true, 00:16:44.683 "nvme_admin": false, 00:16:44.683 "nvme_io": false, 00:16:44.683 "nvme_io_md": false, 00:16:44.683 "write_zeroes": true, 00:16:44.683 "zcopy": true, 00:16:44.683 "get_zone_info": false, 00:16:44.683 "zone_management": false, 00:16:44.683 "zone_append": false, 00:16:44.683 "compare": false, 00:16:44.683 "compare_and_write": false, 00:16:44.683 "abort": true, 00:16:44.683 "seek_hole": false, 00:16:44.683 "seek_data": false, 00:16:44.683 "copy": true, 00:16:44.683 "nvme_iov_md": false 00:16:44.683 }, 00:16:44.683 "memory_domains": [ 00:16:44.683 { 00:16:44.683 "dma_device_id": "system", 00:16:44.683 "dma_device_type": 1 00:16:44.683 }, 00:16:44.683 { 00:16:44.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.683 "dma_device_type": 2 00:16:44.683 } 00:16:44.683 ], 00:16:44.683 "driver_specific": {} 00:16:44.683 } 00:16:44.683 ] 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.683 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.942 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.942 "name": "Existed_Raid", 
00:16:44.942 "uuid": "3c694f2d-9322-4922-ae8c-06e365250a11", 00:16:44.942 "strip_size_kb": 64, 00:16:44.942 "state": "online", 00:16:44.942 "raid_level": "concat", 00:16:44.942 "superblock": true, 00:16:44.942 "num_base_bdevs": 2, 00:16:44.942 "num_base_bdevs_discovered": 2, 00:16:44.942 "num_base_bdevs_operational": 2, 00:16:44.942 "base_bdevs_list": [ 00:16:44.942 { 00:16:44.942 "name": "BaseBdev1", 00:16:44.942 "uuid": "16558cfb-90af-4394-975e-f16dcfd9c2a8", 00:16:44.942 "is_configured": true, 00:16:44.942 "data_offset": 2048, 00:16:44.942 "data_size": 63488 00:16:44.942 }, 00:16:44.942 { 00:16:44.942 "name": "BaseBdev2", 00:16:44.942 "uuid": "9ab13176-d0ec-4c8f-8cb6-361c2836e1e7", 00:16:44.942 "is_configured": true, 00:16:44.942 "data_offset": 2048, 00:16:44.942 "data_size": 63488 00:16:44.942 } 00:16:44.942 ] 00:16:44.942 }' 00:16:44.942 13:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.942 13:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:45.511 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:46.078 [2024-07-25 13:59:34.819382] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.078 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:46.078 "name": "Existed_Raid", 00:16:46.078 "aliases": [ 00:16:46.078 "3c694f2d-9322-4922-ae8c-06e365250a11" 00:16:46.078 ], 00:16:46.078 "product_name": "Raid Volume", 00:16:46.078 "block_size": 512, 00:16:46.078 "num_blocks": 126976, 00:16:46.078 "uuid": "3c694f2d-9322-4922-ae8c-06e365250a11", 00:16:46.078 "assigned_rate_limits": { 00:16:46.078 "rw_ios_per_sec": 0, 00:16:46.078 "rw_mbytes_per_sec": 0, 00:16:46.078 "r_mbytes_per_sec": 0, 00:16:46.078 "w_mbytes_per_sec": 0 00:16:46.078 }, 00:16:46.078 "claimed": false, 00:16:46.078 "zoned": false, 00:16:46.078 "supported_io_types": { 00:16:46.079 "read": true, 00:16:46.079 "write": true, 00:16:46.079 "unmap": true, 00:16:46.079 "flush": true, 00:16:46.079 "reset": true, 00:16:46.079 "nvme_admin": false, 00:16:46.079 "nvme_io": false, 00:16:46.079 "nvme_io_md": false, 00:16:46.079 "write_zeroes": true, 00:16:46.079 "zcopy": false, 00:16:46.079 "get_zone_info": false, 00:16:46.079 "zone_management": false, 00:16:46.079 "zone_append": false, 00:16:46.079 "compare": false, 00:16:46.079 "compare_and_write": false, 00:16:46.079 "abort": false, 00:16:46.079 "seek_hole": false, 00:16:46.079 "seek_data": false, 00:16:46.079 "copy": false, 00:16:46.079 "nvme_iov_md": false 00:16:46.079 }, 00:16:46.079 "memory_domains": [ 
00:16:46.079 { 00:16:46.079 "dma_device_id": "system", 00:16:46.079 "dma_device_type": 1 00:16:46.079 }, 00:16:46.079 { 00:16:46.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.079 "dma_device_type": 2 00:16:46.079 }, 00:16:46.079 { 00:16:46.079 "dma_device_id": "system", 00:16:46.079 "dma_device_type": 1 00:16:46.079 }, 00:16:46.079 { 00:16:46.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.079 "dma_device_type": 2 00:16:46.079 } 00:16:46.079 ], 00:16:46.079 "driver_specific": { 00:16:46.079 "raid": { 00:16:46.079 "uuid": "3c694f2d-9322-4922-ae8c-06e365250a11", 00:16:46.079 "strip_size_kb": 64, 00:16:46.079 "state": "online", 00:16:46.079 "raid_level": "concat", 00:16:46.079 "superblock": true, 00:16:46.079 "num_base_bdevs": 2, 00:16:46.079 "num_base_bdevs_discovered": 2, 00:16:46.079 "num_base_bdevs_operational": 2, 00:16:46.079 "base_bdevs_list": [ 00:16:46.079 { 00:16:46.079 "name": "BaseBdev1", 00:16:46.079 "uuid": "16558cfb-90af-4394-975e-f16dcfd9c2a8", 00:16:46.079 "is_configured": true, 00:16:46.079 "data_offset": 2048, 00:16:46.079 "data_size": 63488 00:16:46.079 }, 00:16:46.079 { 00:16:46.079 "name": "BaseBdev2", 00:16:46.079 "uuid": "9ab13176-d0ec-4c8f-8cb6-361c2836e1e7", 00:16:46.079 "is_configured": true, 00:16:46.079 "data_offset": 2048, 00:16:46.079 "data_size": 63488 00:16:46.079 } 00:16:46.079 ] 00:16:46.079 } 00:16:46.079 } 00:16:46.079 }' 00:16:46.079 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.079 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:46.079 BaseBdev2' 00:16:46.079 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.079 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:46.079 13:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.338 "name": "BaseBdev1", 00:16:46.338 "aliases": [ 00:16:46.338 "16558cfb-90af-4394-975e-f16dcfd9c2a8" 00:16:46.338 ], 00:16:46.338 "product_name": "Malloc disk", 00:16:46.338 "block_size": 512, 00:16:46.338 "num_blocks": 65536, 00:16:46.338 "uuid": "16558cfb-90af-4394-975e-f16dcfd9c2a8", 00:16:46.338 "assigned_rate_limits": { 00:16:46.338 "rw_ios_per_sec": 0, 00:16:46.338 "rw_mbytes_per_sec": 0, 00:16:46.338 "r_mbytes_per_sec": 0, 00:16:46.338 "w_mbytes_per_sec": 0 00:16:46.338 }, 00:16:46.338 "claimed": true, 00:16:46.338 "claim_type": "exclusive_write", 00:16:46.338 "zoned": false, 00:16:46.338 "supported_io_types": { 00:16:46.338 "read": true, 00:16:46.338 "write": true, 00:16:46.338 "unmap": true, 00:16:46.338 "flush": true, 00:16:46.338 "reset": true, 00:16:46.338 "nvme_admin": false, 00:16:46.338 "nvme_io": false, 00:16:46.338 "nvme_io_md": false, 00:16:46.338 "write_zeroes": true, 00:16:46.338 "zcopy": true, 00:16:46.338 "get_zone_info": false, 00:16:46.338 "zone_management": false, 00:16:46.338 "zone_append": false, 00:16:46.338 "compare": false, 00:16:46.338 "compare_and_write": false, 00:16:46.338 "abort": true, 00:16:46.338 "seek_hole": false, 00:16:46.338 "seek_data": false, 00:16:46.338 "copy": true, 00:16:46.338 "nvme_iov_md": false 00:16:46.338 }, 00:16:46.338 "memory_domains": [ 
00:16:46.338 { 00:16:46.338 "dma_device_id": "system", 00:16:46.338 "dma_device_type": 1 00:16:46.338 }, 00:16:46.338 { 00:16:46.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.338 "dma_device_type": 2 00:16:46.338 } 00:16:46.338 ], 00:16:46.338 "driver_specific": {} 00:16:46.338 }' 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.338 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:46.597 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.856 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.856 "name": "BaseBdev2", 00:16:46.856 "aliases": [ 00:16:46.856 "9ab13176-d0ec-4c8f-8cb6-361c2836e1e7" 00:16:46.856 ], 00:16:46.856 "product_name": "Malloc disk", 00:16:46.856 "block_size": 512, 00:16:46.856 "num_blocks": 65536, 00:16:46.856 "uuid": "9ab13176-d0ec-4c8f-8cb6-361c2836e1e7", 00:16:46.856 "assigned_rate_limits": { 00:16:46.857 "rw_ios_per_sec": 0, 00:16:46.857 "rw_mbytes_per_sec": 0, 00:16:46.857 "r_mbytes_per_sec": 0, 00:16:46.857 "w_mbytes_per_sec": 0 00:16:46.857 }, 00:16:46.857 "claimed": true, 00:16:46.857 "claim_type": "exclusive_write", 00:16:46.857 "zoned": false, 00:16:46.857 "supported_io_types": { 00:16:46.857 "read": true, 00:16:46.857 "write": true, 00:16:46.857 "unmap": true, 00:16:46.857 "flush": true, 00:16:46.857 "reset": true, 00:16:46.857 "nvme_admin": false, 00:16:46.857 "nvme_io": false, 00:16:46.857 "nvme_io_md": false, 00:16:46.857 "write_zeroes": true, 00:16:46.857 "zcopy": true, 00:16:46.857 "get_zone_info": false, 00:16:46.857 "zone_management": false, 00:16:46.857 "zone_append": false, 00:16:46.857 "compare": false, 00:16:46.857 "compare_and_write": false, 00:16:46.857 "abort": true, 00:16:46.857 "seek_hole": false, 00:16:46.857 "seek_data": false, 00:16:46.857 "copy": true, 00:16:46.857 "nvme_iov_md": false 00:16:46.857 }, 00:16:46.857 "memory_domains": [ 00:16:46.857 { 00:16:46.857 "dma_device_id": "system", 00:16:46.857 "dma_device_type": 1 00:16:46.857 }, 00:16:46.857 { 00:16:46.857 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:46.857 "dma_device_type": 2 00:16:46.857 } 00:16:46.857 ], 00:16:46.857 "driver_specific": {} 00:16:46.857 }' 00:16:46.857 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.115 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.115 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:47.115 13:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.115 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.115 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:47.115 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.115 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.115 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:47.115 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.373 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.373 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:47.373 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:47.632 [2024-07-25 13:59:36.514495] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.632 [2024-07-25 13:59:36.514836] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.632 [2024-07-25 13:59:36.515036] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.632 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:47.632 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.633 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.891 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.891 "name": "Existed_Raid", 00:16:47.892 "uuid": "3c694f2d-9322-4922-ae8c-06e365250a11", 00:16:47.892 "strip_size_kb": 64, 00:16:47.892 "state": "offline", 00:16:47.892 "raid_level": "concat", 00:16:47.892 "superblock": true, 00:16:47.892 "num_base_bdevs": 2, 00:16:47.892 "num_base_bdevs_discovered": 1, 00:16:47.892 "num_base_bdevs_operational": 1, 00:16:47.892 "base_bdevs_list": [ 00:16:47.892 { 00:16:47.892 "name": null, 00:16:47.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.892 "is_configured": false, 00:16:47.892 "data_offset": 2048, 00:16:47.892 "data_size": 63488 00:16:47.892 }, 00:16:47.892 { 00:16:47.892 "name": "BaseBdev2", 00:16:47.892 "uuid": "9ab13176-d0ec-4c8f-8cb6-361c2836e1e7", 00:16:47.892 "is_configured": true, 00:16:47.892 "data_offset": 2048, 00:16:47.892 "data_size": 63488 00:16:47.892 } 00:16:47.892 ] 00:16:47.892 }' 00:16:47.892 13:59:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.892 13:59:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.459 13:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:48.459 13:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:48.718 13:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.718 13:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:48.976 13:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:48.976 13:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.976 13:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:49.234 [2024-07-25 13:59:38.116365] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.234 [2024-07-25 13:59:38.116740] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:16:49.234 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:49.234 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:49.235 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.235 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 122037 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 122037 ']' 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 122037 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122037 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122037' 00:16:49.493 killing process with pid 122037 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 122037 00:16:49.493 13:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 122037 00:16:49.493 [2024-07-25 13:59:38.492285] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.493 [2024-07-25 13:59:38.492405] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.940 ************************************ 00:16:50.940 END TEST raid_state_function_test_sb 00:16:50.940 ************************************ 00:16:50.940 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:50.940 00:16:50.940 real 0m12.734s 00:16:50.940 user 0m22.530s 00:16:50.940 sys 0m1.499s 00:16:50.940 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.940 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.940 13:59:39 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:16:50.940 13:59:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:50.940 13:59:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.940 13:59:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.940 ************************************ 00:16:50.940 START TEST raid_superblock_test 00:16:50.940 ************************************ 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=122426 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 122426 /var/tmp/spdk-raid.sock 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 122426 ']' 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:50.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.940 13:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.940 [2024-07-25 13:59:39.758137] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
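The records above show the fixture pattern the whole suite relies on: a standalone bdev_svc application is started on a private RPC socket with bdev_raid debug logging enabled, and every subsequent step drives it through scripts/rpc.py against that socket. Below is a minimal sketch of that pattern, assuming the same checkout layout as in the trace; the rpc helper and the socket wait loop are illustrative stand-ins, not the suite's own waitforlisten/killprocess helpers.

  SPDK_DIR=/home/vagrant/spdk_repo/spdk                              # assumed checkout path, as in the trace
  SOCK=/var/tmp/spdk-raid.sock
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }

  "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &   # RPC target with raid debug traces
  svc_pid=$!
  until [ -S "$SOCK" ]; do sleep 0.1; done                           # crude stand-in for waitforlisten

  rpc bdev_malloc_create 32 512 -b BaseBdev1                         # 32 MiB backing bdev, 512 B blocks
  rpc bdev_malloc_create 32 512 -b BaseBdev2
  rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  kill "$svc_pid"                                                    # tear the RPC target down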
00:16:50.940 [2024-07-25 13:59:39.758616] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122426 ] 00:16:50.940 [2024-07-25 13:59:39.932958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.199 [2024-07-25 13:59:40.181316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.458 [2024-07-25 13:59:40.380265] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:52.025 13:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:52.283 malloc1 00:16:52.283 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:52.541 [2024-07-25 13:59:41.353172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:52.541 [2024-07-25 13:59:41.353628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.541 [2024-07-25 13:59:41.353845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:52.541 [2024-07-25 13:59:41.354022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.541 [2024-07-25 13:59:41.356931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.541 [2024-07-25 13:59:41.357128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:52.541 pt1 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:52.541 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:52.798 malloc2 00:16:52.799 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.056 [2024-07-25 13:59:41.870699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.056 [2024-07-25 13:59:41.871095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.056 [2024-07-25 13:59:41.871291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:53.056 [2024-07-25 13:59:41.871457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.056 [2024-07-25 13:59:41.874219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.056 [2024-07-25 13:59:41.874424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.056 pt2 00:16:53.056 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:53.056 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:53.056 13:59:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:16:53.313 [2024-07-25 13:59:42.134926] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:53.313 [2024-07-25 13:59:42.137530] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.313 [2024-07-25 13:59:42.137954] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:16:53.313 [2024-07-25 13:59:42.138096] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:53.313 [2024-07-25 13:59:42.138425] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:53.313 [2024-07-25 13:59:42.138993] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:16:53.313 [2024-07-25 13:59:42.139130] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:16:53.313 [2024-07-25 13:59:42.139570] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.313 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.571 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.571 "name": "raid_bdev1", 00:16:53.571 "uuid": "e6ee73da-8561-488b-a2ce-bb0f43bc13f9", 00:16:53.571 "strip_size_kb": 64, 00:16:53.571 "state": "online", 00:16:53.571 "raid_level": "concat", 00:16:53.571 "superblock": true, 00:16:53.571 "num_base_bdevs": 2, 00:16:53.571 "num_base_bdevs_discovered": 2, 00:16:53.571 "num_base_bdevs_operational": 2, 00:16:53.571 "base_bdevs_list": [ 00:16:53.571 { 00:16:53.571 "name": "pt1", 00:16:53.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.571 "is_configured": true, 00:16:53.571 "data_offset": 2048, 00:16:53.571 "data_size": 63488 00:16:53.571 }, 00:16:53.571 { 00:16:53.571 "name": "pt2", 00:16:53.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.571 "is_configured": true, 00:16:53.571 "data_offset": 2048, 00:16:53.571 "data_size": 63488 00:16:53.571 } 00:16:53.571 ] 00:16:53.571 }' 00:16:53.571 13:59:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.571 13:59:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:54.138 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:54.397 [2024-07-25 13:59:43.308125] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.397 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:54.397 "name": "raid_bdev1", 00:16:54.397 "aliases": [ 00:16:54.397 "e6ee73da-8561-488b-a2ce-bb0f43bc13f9" 00:16:54.397 ], 00:16:54.397 "product_name": "Raid Volume", 00:16:54.397 "block_size": 512, 00:16:54.397 "num_blocks": 126976, 00:16:54.397 "uuid": "e6ee73da-8561-488b-a2ce-bb0f43bc13f9", 00:16:54.397 "assigned_rate_limits": { 00:16:54.397 "rw_ios_per_sec": 0, 00:16:54.397 "rw_mbytes_per_sec": 0, 00:16:54.397 "r_mbytes_per_sec": 0, 00:16:54.397 "w_mbytes_per_sec": 0 00:16:54.397 }, 
00:16:54.397 "claimed": false, 00:16:54.397 "zoned": false, 00:16:54.397 "supported_io_types": { 00:16:54.397 "read": true, 00:16:54.397 "write": true, 00:16:54.397 "unmap": true, 00:16:54.397 "flush": true, 00:16:54.397 "reset": true, 00:16:54.397 "nvme_admin": false, 00:16:54.397 "nvme_io": false, 00:16:54.397 "nvme_io_md": false, 00:16:54.397 "write_zeroes": true, 00:16:54.397 "zcopy": false, 00:16:54.397 "get_zone_info": false, 00:16:54.397 "zone_management": false, 00:16:54.397 "zone_append": false, 00:16:54.397 "compare": false, 00:16:54.397 "compare_and_write": false, 00:16:54.397 "abort": false, 00:16:54.397 "seek_hole": false, 00:16:54.397 "seek_data": false, 00:16:54.397 "copy": false, 00:16:54.397 "nvme_iov_md": false 00:16:54.397 }, 00:16:54.397 "memory_domains": [ 00:16:54.397 { 00:16:54.397 "dma_device_id": "system", 00:16:54.397 "dma_device_type": 1 00:16:54.397 }, 00:16:54.397 { 00:16:54.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.397 "dma_device_type": 2 00:16:54.397 }, 00:16:54.397 { 00:16:54.397 "dma_device_id": "system", 00:16:54.397 "dma_device_type": 1 00:16:54.397 }, 00:16:54.397 { 00:16:54.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.397 "dma_device_type": 2 00:16:54.397 } 00:16:54.398 ], 00:16:54.398 "driver_specific": { 00:16:54.398 "raid": { 00:16:54.398 "uuid": "e6ee73da-8561-488b-a2ce-bb0f43bc13f9", 00:16:54.398 "strip_size_kb": 64, 00:16:54.398 "state": "online", 00:16:54.398 "raid_level": "concat", 00:16:54.398 "superblock": true, 00:16:54.398 "num_base_bdevs": 2, 00:16:54.398 "num_base_bdevs_discovered": 2, 00:16:54.398 "num_base_bdevs_operational": 2, 00:16:54.398 "base_bdevs_list": [ 00:16:54.398 { 00:16:54.398 "name": "pt1", 00:16:54.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.398 "is_configured": true, 00:16:54.398 "data_offset": 2048, 00:16:54.398 "data_size": 63488 00:16:54.398 }, 00:16:54.398 { 00:16:54.398 "name": "pt2", 00:16:54.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.398 "is_configured": true, 00:16:54.398 "data_offset": 2048, 00:16:54.398 "data_size": 63488 00:16:54.398 } 00:16:54.398 ] 00:16:54.398 } 00:16:54.398 } 00:16:54.398 }' 00:16:54.398 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.398 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:54.398 pt2' 00:16:54.398 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.398 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:54.398 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.656 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.656 "name": "pt1", 00:16:54.656 "aliases": [ 00:16:54.656 "00000000-0000-0000-0000-000000000001" 00:16:54.656 ], 00:16:54.656 "product_name": "passthru", 00:16:54.656 "block_size": 512, 00:16:54.656 "num_blocks": 65536, 00:16:54.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.656 "assigned_rate_limits": { 00:16:54.656 "rw_ios_per_sec": 0, 00:16:54.656 "rw_mbytes_per_sec": 0, 00:16:54.656 "r_mbytes_per_sec": 0, 00:16:54.656 "w_mbytes_per_sec": 0 00:16:54.656 }, 00:16:54.656 "claimed": true, 00:16:54.656 "claim_type": "exclusive_write", 00:16:54.656 "zoned": false, 00:16:54.656 
"supported_io_types": { 00:16:54.656 "read": true, 00:16:54.656 "write": true, 00:16:54.656 "unmap": true, 00:16:54.656 "flush": true, 00:16:54.656 "reset": true, 00:16:54.656 "nvme_admin": false, 00:16:54.656 "nvme_io": false, 00:16:54.656 "nvme_io_md": false, 00:16:54.656 "write_zeroes": true, 00:16:54.656 "zcopy": true, 00:16:54.656 "get_zone_info": false, 00:16:54.656 "zone_management": false, 00:16:54.656 "zone_append": false, 00:16:54.656 "compare": false, 00:16:54.656 "compare_and_write": false, 00:16:54.656 "abort": true, 00:16:54.656 "seek_hole": false, 00:16:54.656 "seek_data": false, 00:16:54.656 "copy": true, 00:16:54.656 "nvme_iov_md": false 00:16:54.656 }, 00:16:54.656 "memory_domains": [ 00:16:54.656 { 00:16:54.656 "dma_device_id": "system", 00:16:54.656 "dma_device_type": 1 00:16:54.656 }, 00:16:54.656 { 00:16:54.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.656 "dma_device_type": 2 00:16:54.656 } 00:16:54.656 ], 00:16:54.656 "driver_specific": { 00:16:54.656 "passthru": { 00:16:54.656 "name": "pt1", 00:16:54.656 "base_bdev_name": "malloc1" 00:16:54.656 } 00:16:54.656 } 00:16:54.656 }' 00:16:54.656 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.656 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.970 13:59:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.228 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.228 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.228 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:55.228 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.487 "name": "pt2", 00:16:55.487 "aliases": [ 00:16:55.487 "00000000-0000-0000-0000-000000000002" 00:16:55.487 ], 00:16:55.487 "product_name": "passthru", 00:16:55.487 "block_size": 512, 00:16:55.487 "num_blocks": 65536, 00:16:55.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.487 "assigned_rate_limits": { 00:16:55.487 "rw_ios_per_sec": 0, 00:16:55.487 "rw_mbytes_per_sec": 0, 00:16:55.487 "r_mbytes_per_sec": 0, 00:16:55.487 "w_mbytes_per_sec": 0 00:16:55.487 }, 00:16:55.487 "claimed": true, 00:16:55.487 "claim_type": "exclusive_write", 00:16:55.487 "zoned": false, 00:16:55.487 "supported_io_types": { 00:16:55.487 "read": true, 00:16:55.487 "write": true, 00:16:55.487 "unmap": true, 00:16:55.487 "flush": true, 00:16:55.487 
"reset": true, 00:16:55.487 "nvme_admin": false, 00:16:55.487 "nvme_io": false, 00:16:55.487 "nvme_io_md": false, 00:16:55.487 "write_zeroes": true, 00:16:55.487 "zcopy": true, 00:16:55.487 "get_zone_info": false, 00:16:55.487 "zone_management": false, 00:16:55.487 "zone_append": false, 00:16:55.487 "compare": false, 00:16:55.487 "compare_and_write": false, 00:16:55.487 "abort": true, 00:16:55.487 "seek_hole": false, 00:16:55.487 "seek_data": false, 00:16:55.487 "copy": true, 00:16:55.487 "nvme_iov_md": false 00:16:55.487 }, 00:16:55.487 "memory_domains": [ 00:16:55.487 { 00:16:55.487 "dma_device_id": "system", 00:16:55.487 "dma_device_type": 1 00:16:55.487 }, 00:16:55.487 { 00:16:55.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.487 "dma_device_type": 2 00:16:55.487 } 00:16:55.487 ], 00:16:55.487 "driver_specific": { 00:16:55.487 "passthru": { 00:16:55.487 "name": "pt2", 00:16:55.487 "base_bdev_name": "malloc2" 00:16:55.487 } 00:16:55.487 } 00:16:55.487 }' 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.487 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.745 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.745 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.745 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.745 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.745 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.745 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:55.745 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:56.004 [2024-07-25 13:59:44.932469] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.004 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=e6ee73da-8561-488b-a2ce-bb0f43bc13f9 00:16:56.004 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z e6ee73da-8561-488b-a2ce-bb0f43bc13f9 ']' 00:16:56.004 13:59:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:56.262 [2024-07-25 13:59:45.220231] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.262 [2024-07-25 13:59:45.220591] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.262 [2024-07-25 13:59:45.220885] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.262 [2024-07-25 13:59:45.221116] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:56.262 [2024-07-25 13:59:45.221254] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:16:56.262 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.262 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:56.519 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:56.519 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:56.519 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:56.519 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:56.778 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:56.778 13:59:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:57.036 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:57.036 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:57.294 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:57.552 [2024-07-25 13:59:46.488495] 
bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:57.552 [2024-07-25 13:59:46.491053] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:57.552 [2024-07-25 13:59:46.491298] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:57.552 [2024-07-25 13:59:46.491580] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:57.552 [2024-07-25 13:59:46.491769] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.552 [2024-07-25 13:59:46.491916] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:16:57.552 request: 00:16:57.552 { 00:16:57.552 "name": "raid_bdev1", 00:16:57.552 "raid_level": "concat", 00:16:57.552 "base_bdevs": [ 00:16:57.552 "malloc1", 00:16:57.552 "malloc2" 00:16:57.552 ], 00:16:57.552 "strip_size_kb": 64, 00:16:57.552 "superblock": false, 00:16:57.552 "method": "bdev_raid_create", 00:16:57.552 "req_id": 1 00:16:57.552 } 00:16:57.552 Got JSON-RPC error response 00:16:57.552 response: 00:16:57.552 { 00:16:57.552 "code": -17, 00:16:57.552 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:57.552 } 00:16:57.552 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:57.552 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.552 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.552 13:59:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.552 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.552 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:57.810 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:57.810 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:57.810 13:59:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.068 [2024-07-25 13:59:46.996726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.068 [2024-07-25 13:59:46.997159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.068 [2024-07-25 13:59:46.997360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:58.068 [2024-07-25 13:59:46.997544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.068 [2024-07-25 13:59:47.000327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.068 [2024-07-25 13:59:47.000556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.068 [2024-07-25 13:59:47.000853] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:58.068 [2024-07-25 13:59:47.001051] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.068 pt1 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.068 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.326 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.326 "name": "raid_bdev1", 00:16:58.326 "uuid": "e6ee73da-8561-488b-a2ce-bb0f43bc13f9", 00:16:58.326 "strip_size_kb": 64, 00:16:58.326 "state": "configuring", 00:16:58.326 "raid_level": "concat", 00:16:58.326 "superblock": true, 00:16:58.326 "num_base_bdevs": 2, 00:16:58.326 "num_base_bdevs_discovered": 1, 00:16:58.326 "num_base_bdevs_operational": 2, 00:16:58.326 "base_bdevs_list": [ 00:16:58.326 { 00:16:58.326 "name": "pt1", 00:16:58.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.326 "is_configured": true, 00:16:58.326 "data_offset": 2048, 00:16:58.326 "data_size": 63488 00:16:58.326 }, 00:16:58.326 { 00:16:58.326 "name": null, 00:16:58.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.327 "is_configured": false, 00:16:58.327 "data_offset": 2048, 00:16:58.327 "data_size": 63488 00:16:58.327 } 00:16:58.327 ] 00:16:58.327 }' 00:16:58.327 13:59:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.327 13:59:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.260 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:16:59.260 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:59.260 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:59.260 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.260 [2024-07-25 13:59:48.285256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.260 [2024-07-25 13:59:48.285666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.260 [2024-07-25 13:59:48.285845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:59.260 [2024-07-25 13:59:48.285983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.260 [2024-07-25 
13:59:48.286575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.260 [2024-07-25 13:59:48.286742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.260 [2024-07-25 13:59:48.286969] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.260 [2024-07-25 13:59:48.287105] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.260 [2024-07-25 13:59:48.287282] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:59.260 [2024-07-25 13:59:48.287395] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:59.260 [2024-07-25 13:59:48.287546] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:59.260 [2024-07-25 13:59:48.287940] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:59.260 [2024-07-25 13:59:48.288068] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:16:59.260 [2024-07-25 13:59:48.288341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.260 pt2 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.519 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.520 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.520 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.520 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.520 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.520 "name": "raid_bdev1", 00:16:59.520 "uuid": "e6ee73da-8561-488b-a2ce-bb0f43bc13f9", 00:16:59.520 "strip_size_kb": 64, 00:16:59.520 "state": "online", 00:16:59.520 "raid_level": "concat", 00:16:59.520 "superblock": true, 00:16:59.520 "num_base_bdevs": 2, 00:16:59.520 "num_base_bdevs_discovered": 2, 00:16:59.520 "num_base_bdevs_operational": 2, 00:16:59.520 "base_bdevs_list": [ 00:16:59.520 { 00:16:59.520 "name": "pt1", 00:16:59.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.520 "is_configured": true, 00:16:59.520 "data_offset": 2048, 00:16:59.520 
"data_size": 63488 00:16:59.520 }, 00:16:59.520 { 00:16:59.520 "name": "pt2", 00:16:59.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.520 "is_configured": true, 00:16:59.520 "data_offset": 2048, 00:16:59.520 "data_size": 63488 00:16:59.520 } 00:16:59.520 ] 00:16:59.520 }' 00:16:59.520 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.520 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:00.456 [2024-07-25 13:59:49.477830] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:00.456 "name": "raid_bdev1", 00:17:00.456 "aliases": [ 00:17:00.456 "e6ee73da-8561-488b-a2ce-bb0f43bc13f9" 00:17:00.456 ], 00:17:00.456 "product_name": "Raid Volume", 00:17:00.456 "block_size": 512, 00:17:00.456 "num_blocks": 126976, 00:17:00.456 "uuid": "e6ee73da-8561-488b-a2ce-bb0f43bc13f9", 00:17:00.456 "assigned_rate_limits": { 00:17:00.456 "rw_ios_per_sec": 0, 00:17:00.456 "rw_mbytes_per_sec": 0, 00:17:00.456 "r_mbytes_per_sec": 0, 00:17:00.456 "w_mbytes_per_sec": 0 00:17:00.456 }, 00:17:00.456 "claimed": false, 00:17:00.456 "zoned": false, 00:17:00.456 "supported_io_types": { 00:17:00.456 "read": true, 00:17:00.456 "write": true, 00:17:00.456 "unmap": true, 00:17:00.456 "flush": true, 00:17:00.456 "reset": true, 00:17:00.456 "nvme_admin": false, 00:17:00.456 "nvme_io": false, 00:17:00.456 "nvme_io_md": false, 00:17:00.456 "write_zeroes": true, 00:17:00.456 "zcopy": false, 00:17:00.456 "get_zone_info": false, 00:17:00.456 "zone_management": false, 00:17:00.456 "zone_append": false, 00:17:00.456 "compare": false, 00:17:00.456 "compare_and_write": false, 00:17:00.456 "abort": false, 00:17:00.456 "seek_hole": false, 00:17:00.456 "seek_data": false, 00:17:00.456 "copy": false, 00:17:00.456 "nvme_iov_md": false 00:17:00.456 }, 00:17:00.456 "memory_domains": [ 00:17:00.456 { 00:17:00.456 "dma_device_id": "system", 00:17:00.456 "dma_device_type": 1 00:17:00.456 }, 00:17:00.456 { 00:17:00.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.456 "dma_device_type": 2 00:17:00.456 }, 00:17:00.456 { 00:17:00.456 "dma_device_id": "system", 00:17:00.456 "dma_device_type": 1 00:17:00.456 }, 00:17:00.456 { 00:17:00.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.456 "dma_device_type": 2 00:17:00.456 } 00:17:00.456 ], 00:17:00.456 "driver_specific": { 00:17:00.456 "raid": { 00:17:00.456 "uuid": "e6ee73da-8561-488b-a2ce-bb0f43bc13f9", 00:17:00.456 "strip_size_kb": 64, 00:17:00.456 "state": 
"online", 00:17:00.456 "raid_level": "concat", 00:17:00.456 "superblock": true, 00:17:00.456 "num_base_bdevs": 2, 00:17:00.456 "num_base_bdevs_discovered": 2, 00:17:00.456 "num_base_bdevs_operational": 2, 00:17:00.456 "base_bdevs_list": [ 00:17:00.456 { 00:17:00.456 "name": "pt1", 00:17:00.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.456 "is_configured": true, 00:17:00.456 "data_offset": 2048, 00:17:00.456 "data_size": 63488 00:17:00.456 }, 00:17:00.456 { 00:17:00.456 "name": "pt2", 00:17:00.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.456 "is_configured": true, 00:17:00.456 "data_offset": 2048, 00:17:00.456 "data_size": 63488 00:17:00.456 } 00:17:00.456 ] 00:17:00.456 } 00:17:00.456 } 00:17:00.456 }' 00:17:00.456 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.715 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:00.715 pt2' 00:17:00.715 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.715 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:00.715 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.973 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.973 "name": "pt1", 00:17:00.973 "aliases": [ 00:17:00.973 "00000000-0000-0000-0000-000000000001" 00:17:00.973 ], 00:17:00.973 "product_name": "passthru", 00:17:00.973 "block_size": 512, 00:17:00.973 "num_blocks": 65536, 00:17:00.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.973 "assigned_rate_limits": { 00:17:00.973 "rw_ios_per_sec": 0, 00:17:00.973 "rw_mbytes_per_sec": 0, 00:17:00.973 "r_mbytes_per_sec": 0, 00:17:00.973 "w_mbytes_per_sec": 0 00:17:00.973 }, 00:17:00.973 "claimed": true, 00:17:00.973 "claim_type": "exclusive_write", 00:17:00.973 "zoned": false, 00:17:00.973 "supported_io_types": { 00:17:00.973 "read": true, 00:17:00.973 "write": true, 00:17:00.973 "unmap": true, 00:17:00.973 "flush": true, 00:17:00.973 "reset": true, 00:17:00.973 "nvme_admin": false, 00:17:00.973 "nvme_io": false, 00:17:00.973 "nvme_io_md": false, 00:17:00.973 "write_zeroes": true, 00:17:00.973 "zcopy": true, 00:17:00.973 "get_zone_info": false, 00:17:00.973 "zone_management": false, 00:17:00.973 "zone_append": false, 00:17:00.973 "compare": false, 00:17:00.973 "compare_and_write": false, 00:17:00.973 "abort": true, 00:17:00.973 "seek_hole": false, 00:17:00.973 "seek_data": false, 00:17:00.973 "copy": true, 00:17:00.973 "nvme_iov_md": false 00:17:00.973 }, 00:17:00.973 "memory_domains": [ 00:17:00.973 { 00:17:00.973 "dma_device_id": "system", 00:17:00.973 "dma_device_type": 1 00:17:00.973 }, 00:17:00.973 { 00:17:00.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.973 "dma_device_type": 2 00:17:00.973 } 00:17:00.973 ], 00:17:00.973 "driver_specific": { 00:17:00.973 "passthru": { 00:17:00.973 "name": "pt1", 00:17:00.973 "base_bdev_name": "malloc1" 00:17:00.973 } 00:17:00.973 } 00:17:00.973 }' 00:17:00.973 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.973 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.974 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:17:00.974 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.974 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.974 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.974 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.974 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.232 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.232 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.232 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.232 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.232 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.232 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:01.232 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.491 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.491 "name": "pt2", 00:17:01.491 "aliases": [ 00:17:01.491 "00000000-0000-0000-0000-000000000002" 00:17:01.491 ], 00:17:01.491 "product_name": "passthru", 00:17:01.491 "block_size": 512, 00:17:01.491 "num_blocks": 65536, 00:17:01.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.491 "assigned_rate_limits": { 00:17:01.491 "rw_ios_per_sec": 0, 00:17:01.491 "rw_mbytes_per_sec": 0, 00:17:01.491 "r_mbytes_per_sec": 0, 00:17:01.491 "w_mbytes_per_sec": 0 00:17:01.491 }, 00:17:01.491 "claimed": true, 00:17:01.491 "claim_type": "exclusive_write", 00:17:01.491 "zoned": false, 00:17:01.491 "supported_io_types": { 00:17:01.491 "read": true, 00:17:01.491 "write": true, 00:17:01.491 "unmap": true, 00:17:01.491 "flush": true, 00:17:01.491 "reset": true, 00:17:01.491 "nvme_admin": false, 00:17:01.491 "nvme_io": false, 00:17:01.491 "nvme_io_md": false, 00:17:01.491 "write_zeroes": true, 00:17:01.491 "zcopy": true, 00:17:01.491 "get_zone_info": false, 00:17:01.491 "zone_management": false, 00:17:01.491 "zone_append": false, 00:17:01.491 "compare": false, 00:17:01.491 "compare_and_write": false, 00:17:01.491 "abort": true, 00:17:01.491 "seek_hole": false, 00:17:01.491 "seek_data": false, 00:17:01.491 "copy": true, 00:17:01.491 "nvme_iov_md": false 00:17:01.491 }, 00:17:01.491 "memory_domains": [ 00:17:01.491 { 00:17:01.491 "dma_device_id": "system", 00:17:01.491 "dma_device_type": 1 00:17:01.491 }, 00:17:01.491 { 00:17:01.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.491 "dma_device_type": 2 00:17:01.491 } 00:17:01.491 ], 00:17:01.491 "driver_specific": { 00:17:01.491 "passthru": { 00:17:01.491 "name": "pt2", 00:17:01.491 "base_bdev_name": "malloc2" 00:17:01.491 } 00:17:01.491 } 00:17:01.491 }' 00:17:01.491 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.491 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.491 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.491 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:01.749 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:02.008 [2024-07-25 13:59:51.002168] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' e6ee73da-8561-488b-a2ce-bb0f43bc13f9 '!=' e6ee73da-8561-488b-a2ce-bb0f43bc13f9 ']' 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 122426 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 122426 ']' 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 122426 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122426 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122426' 00:17:02.008 killing process with pid 122426 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 122426 00:17:02.008 [2024-07-25 13:59:51.045722] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.008 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 122426 00:17:02.008 [2024-07-25 13:59:51.046052] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.008 [2024-07-25 13:59:51.046222] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.008 [2024-07-25 13:59:51.046324] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:17:02.266 [2024-07-25 13:59:51.215316] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.640 13:59:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@580 -- # return 0 00:17:03.640 00:17:03.640 real 0m12.659s 00:17:03.640 user 0m22.588s 00:17:03.640 sys 0m1.388s 00:17:03.640 13:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.640 13:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 ************************************ 00:17:03.640 END TEST raid_superblock_test 00:17:03.640 ************************************ 00:17:03.640 13:59:52 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:17:03.640 13:59:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:03.640 13:59:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.640 13:59:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 ************************************ 00:17:03.640 START TEST raid_read_error_test 00:17:03.640 ************************************ 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=concat 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=2 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' concat '!=' raid1 ']' 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.Fz5mMsOR3U 00:17:03.640 13:59:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=122811 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 122811 /var/tmp/spdk-raid.sock 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 122811 ']' 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:03.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.640 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 [2024-07-25 13:59:52.477686] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:17:03.640 [2024-07-25 13:59:52.478183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122811 ] 00:17:03.640 [2024-07-25 13:59:52.650980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.898 [2024-07-25 13:59:52.866998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.155 [2024-07-25 13:59:53.066539] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.756 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.756 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:04.756 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:17:04.756 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:05.014 BaseBdev1_malloc 00:17:05.014 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:05.272 true 00:17:05.272 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:05.531 [2024-07-25 13:59:54.327175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:05.531 [2024-07-25 13:59:54.327592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.531 [2024-07-25 13:59:54.327800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:05.531 [2024-07-25 13:59:54.327931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.531 [2024-07-25 13:59:54.330640] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.531 [2024-07-25 13:59:54.330816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:05.531 BaseBdev1 00:17:05.531 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:17:05.531 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:05.791 BaseBdev2_malloc 00:17:05.791 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:06.049 true 00:17:06.049 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:06.307 [2024-07-25 13:59:55.119790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:06.307 [2024-07-25 13:59:55.120152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.307 [2024-07-25 13:59:55.120379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.307 [2024-07-25 13:59:55.120545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.307 [2024-07-25 13:59:55.123214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.307 [2024-07-25 13:59:55.123388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.307 BaseBdev2 00:17:06.307 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:06.565 [2024-07-25 13:59:55.359960] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.565 [2024-07-25 13:59:55.362497] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.565 [2024-07-25 13:59:55.362883] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:06.565 [2024-07-25 13:59:55.363018] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:06.565 [2024-07-25 13:59:55.363215] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:06.565 [2024-07-25 13:59:55.363690] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:06.565 [2024-07-25 13:59:55.363820] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:17:06.565 [2024-07-25 13:59:55.364184] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.565 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.823 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.823 "name": "raid_bdev1", 00:17:06.823 "uuid": "7e3c8edb-9e0a-43d9-816c-07992c68626e", 00:17:06.823 "strip_size_kb": 64, 00:17:06.823 "state": "online", 00:17:06.823 "raid_level": "concat", 00:17:06.823 "superblock": true, 00:17:06.823 "num_base_bdevs": 2, 00:17:06.823 "num_base_bdevs_discovered": 2, 00:17:06.823 "num_base_bdevs_operational": 2, 00:17:06.823 "base_bdevs_list": [ 00:17:06.823 { 00:17:06.823 "name": "BaseBdev1", 00:17:06.823 "uuid": "45968dbe-7be6-5aee-b5fc-bf36b95bb182", 00:17:06.823 "is_configured": true, 00:17:06.823 "data_offset": 2048, 00:17:06.823 "data_size": 63488 00:17:06.823 }, 00:17:06.823 { 00:17:06.823 "name": "BaseBdev2", 00:17:06.823 "uuid": "73cd17cf-ec5d-5d75-bddf-a28a37678db7", 00:17:06.823 "is_configured": true, 00:17:06.823 "data_offset": 2048, 00:17:06.823 "data_size": 63488 00:17:06.823 } 00:17:06.823 ] 00:17:06.823 }' 00:17:06.823 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.823 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.389 13:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:07.389 13:59:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:17:07.389 [2024-07-25 13:59:56.409724] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:08.413 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ concat = \r\a\i\d\1 ]] 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=2 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.670 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.928 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.928 "name": "raid_bdev1", 00:17:08.928 "uuid": "7e3c8edb-9e0a-43d9-816c-07992c68626e", 00:17:08.928 "strip_size_kb": 64, 00:17:08.928 "state": "online", 00:17:08.928 "raid_level": "concat", 00:17:08.928 "superblock": true, 00:17:08.928 "num_base_bdevs": 2, 00:17:08.928 "num_base_bdevs_discovered": 2, 00:17:08.928 "num_base_bdevs_operational": 2, 00:17:08.928 "base_bdevs_list": [ 00:17:08.928 { 00:17:08.928 "name": "BaseBdev1", 00:17:08.928 "uuid": "45968dbe-7be6-5aee-b5fc-bf36b95bb182", 00:17:08.928 "is_configured": true, 00:17:08.928 "data_offset": 2048, 00:17:08.928 "data_size": 63488 00:17:08.928 }, 00:17:08.928 { 00:17:08.928 "name": "BaseBdev2", 00:17:08.928 "uuid": "73cd17cf-ec5d-5d75-bddf-a28a37678db7", 00:17:08.928 "is_configured": true, 00:17:08.928 "data_offset": 2048, 00:17:08.928 "data_size": 63488 00:17:08.928 } 00:17:08.928 ] 00:17:08.928 }' 00:17:08.928 13:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.928 13:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.493 13:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:09.751 [2024-07-25 13:59:58.770130] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.751 [2024-07-25 13:59:58.770400] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.751 [2024-07-25 13:59:58.773584] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.751 [2024-07-25 13:59:58.773760] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.751 [2024-07-25 13:59:58.773890] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.751 [2024-07-25 13:59:58.774054] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:17:09.751 0 00:17:09.751 13:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 122811 00:17:09.751 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 122811 ']' 00:17:09.751 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 122811 00:17:09.751 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:09.751 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:17:09.751 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122811 00:17:10.009 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:10.009 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:10.009 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122811' 00:17:10.009 killing process with pid 122811 00:17:10.009 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 122811 00:17:10.009 [2024-07-25 13:59:58.814311] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.009 13:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 122811 00:17:10.009 [2024-07-25 13:59:58.926708] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.Fz5mMsOR3U 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.42 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy concat 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.42 != \0\.\0\0 ]] 00:17:11.380 00:17:11.380 real 0m7.727s 00:17:11.380 user 0m11.800s 00:17:11.380 sys 0m0.871s 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.380 14:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 ************************************ 00:17:11.380 END TEST raid_read_error_test 00:17:11.380 ************************************ 00:17:11.380 14:00:00 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:17:11.380 14:00:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:11.380 14:00:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.380 14:00:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 ************************************ 00:17:11.380 START TEST raid_write_error_test 00:17:11.380 ************************************ 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=concat 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=2 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@881 -- # (( i++ )) 00:17:11.380 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' concat '!=' raid1 ']' 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.j6PYGLktat 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=123013 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 123013 /var/tmp/spdk-raid.sock 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 123013 ']' 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:11.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.381 14:00:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.381 [2024-07-25 14:00:00.250162] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
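As in the read-error variant above, the write-error test starts bdevperf with -z so the workload is only kicked off later via perform_tests, stacks an error bdev plus a passthru on each malloc base, injects failures on one of them, and finally greps the failure rate out of the bdevperf log. A condensed sketch of that flow, assembled only from the commands visible in this run (the sleep and the loop are illustrative simplifications; the harness itself uses waitforlisten and per-bdev xtrace), might look like:

    # Sketch only: the bdevperf error-injection flow exercised by raid_write_error_test.
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    LOG=$(mktemp -p /raidtest)   # e.g. /raidtest/tmp.j6PYGLktat in this run

    # Start bdevperf against the raid socket; -z defers the workload until perform_tests.
    "$SPDK"/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$LOG" 2>&1 &
    sleep 1   # crude stand-in for the harness's waitforlisten

    # Stack error + passthru bdevs on two malloc bases, then assemble the concat raid.
    for i in 1 2; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $RPC bdev_error_create BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

    # Inject write failures on one base, run the workload, and pull the failure rate.
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    fail_per_s=$(grep raid_bdev1 "$LOG" | grep -v Job | awk '{print $6}')
    [[ $fail_per_s != 0.00 ]]

    $RPC bdev_raid_delete raid_bdev1

Because concat provides no redundancy (has_redundancy returns 1 for it), the injected write failures are expected to surface as a non-zero fail-per-second figure (0.41 in this run) rather than being absorbed by the raid layer.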
00:17:11.381 [2024-07-25 14:00:00.250620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123013 ] 00:17:11.381 [2024-07-25 14:00:00.416785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.641 [2024-07-25 14:00:00.631546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.900 [2024-07-25 14:00:00.830058] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.464 14:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:12.464 14:00:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:12.464 14:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:17:12.464 14:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:12.720 BaseBdev1_malloc 00:17:12.720 14:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:12.975 true 00:17:12.975 14:00:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:13.230 [2024-07-25 14:00:02.106624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:13.230 [2024-07-25 14:00:02.106949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.230 [2024-07-25 14:00:02.107150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:13.230 [2024-07-25 14:00:02.107282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.230 [2024-07-25 14:00:02.109999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.230 [2024-07-25 14:00:02.110179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.230 BaseBdev1 00:17:13.230 14:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:17:13.230 14:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:13.487 BaseBdev2_malloc 00:17:13.487 14:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:13.743 true 00:17:13.744 14:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:14.001 [2024-07-25 14:00:02.958681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:14.001 [2024-07-25 14:00:02.959075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.001 [2024-07-25 14:00:02.959278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:14.001 [2024-07-25 
14:00:02.959415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.001 [2024-07-25 14:00:02.962096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.001 [2024-07-25 14:00:02.962269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:14.001 BaseBdev2 00:17:14.001 14:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:14.581 [2024-07-25 14:00:03.322886] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.581 [2024-07-25 14:00:03.325349] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.581 [2024-07-25 14:00:03.325744] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:14.581 [2024-07-25 14:00:03.325905] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:14.581 [2024-07-25 14:00:03.326161] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:14.581 [2024-07-25 14:00:03.326716] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:14.581 [2024-07-25 14:00:03.326842] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:17:14.581 [2024-07-25 14:00:03.327227] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.582 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.838 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.838 "name": "raid_bdev1", 00:17:14.838 "uuid": "7d035525-f098-46c8-aefb-a93ace1953bf", 00:17:14.838 "strip_size_kb": 64, 00:17:14.838 "state": "online", 00:17:14.838 "raid_level": "concat", 00:17:14.838 "superblock": true, 00:17:14.838 "num_base_bdevs": 2, 00:17:14.838 "num_base_bdevs_discovered": 2, 00:17:14.838 "num_base_bdevs_operational": 2, 00:17:14.838 "base_bdevs_list": [ 00:17:14.838 { 
00:17:14.838 "name": "BaseBdev1", 00:17:14.838 "uuid": "03888781-0ae9-5526-a05f-2979bea2eb22", 00:17:14.838 "is_configured": true, 00:17:14.838 "data_offset": 2048, 00:17:14.838 "data_size": 63488 00:17:14.838 }, 00:17:14.838 { 00:17:14.838 "name": "BaseBdev2", 00:17:14.838 "uuid": "ba8435db-06cc-5d5c-9126-c782b35e1024", 00:17:14.838 "is_configured": true, 00:17:14.838 "data_offset": 2048, 00:17:14.838 "data_size": 63488 00:17:14.838 } 00:17:14.838 ] 00:17:14.838 }' 00:17:14.838 14:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.838 14:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.403 14:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:17:15.403 14:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:15.660 [2024-07-25 14:00:04.485414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:16.592 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ concat = \r\a\i\d\1 ]] 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=2 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.849 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.107 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.107 "name": "raid_bdev1", 00:17:17.107 "uuid": "7d035525-f098-46c8-aefb-a93ace1953bf", 00:17:17.107 "strip_size_kb": 64, 00:17:17.107 "state": "online", 00:17:17.107 "raid_level": "concat", 00:17:17.107 "superblock": true, 00:17:17.107 "num_base_bdevs": 2, 00:17:17.107 "num_base_bdevs_discovered": 2, 00:17:17.107 "num_base_bdevs_operational": 2, 00:17:17.107 "base_bdevs_list": [ 00:17:17.107 { 
00:17:17.107 "name": "BaseBdev1", 00:17:17.107 "uuid": "03888781-0ae9-5526-a05f-2979bea2eb22", 00:17:17.107 "is_configured": true, 00:17:17.107 "data_offset": 2048, 00:17:17.107 "data_size": 63488 00:17:17.107 }, 00:17:17.107 { 00:17:17.107 "name": "BaseBdev2", 00:17:17.107 "uuid": "ba8435db-06cc-5d5c-9126-c782b35e1024", 00:17:17.107 "is_configured": true, 00:17:17.107 "data_offset": 2048, 00:17:17.107 "data_size": 63488 00:17:17.107 } 00:17:17.107 ] 00:17:17.107 }' 00:17:17.107 14:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.107 14:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.672 14:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:17.930 [2024-07-25 14:00:06.932098] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.930 [2024-07-25 14:00:06.932432] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.930 [2024-07-25 14:00:06.935612] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.930 [2024-07-25 14:00:06.935778] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.930 [2024-07-25 14:00:06.935858] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.930 [2024-07-25 14:00:06.936015] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:17:17.930 0 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 123013 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 123013 ']' 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 123013 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123013 00:17:17.930 killing process with pid 123013 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123013' 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 123013 00:17:17.930 14:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 123013 00:17:17.930 [2024-07-25 14:00:06.975838] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.188 [2024-07-25 14:00:07.087861] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.j6PYGLktat 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:17:19.561 ************************************ 00:17:19.561 END TEST raid_write_error_test 
00:17:19.561 ************************************ 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.41 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy concat 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.41 != \0\.\0\0 ]] 00:17:19.561 00:17:19.561 real 0m8.114s 00:17:19.561 user 0m12.476s 00:17:19.561 sys 0m0.922s 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.561 14:00:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.561 14:00:08 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:17:19.561 14:00:08 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:17:19.561 14:00:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:19.561 14:00:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.561 14:00:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.561 ************************************ 00:17:19.561 START TEST raid_state_function_test 00:17:19.561 ************************************ 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local 
superblock_create_arg 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=123215 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123215' 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:19.561 Process raid pid: 123215 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 123215 /var/tmp/spdk-raid.sock 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 123215 ']' 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.561 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:19.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:19.562 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.562 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.562 [2024-07-25 14:00:08.410169] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
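For anyone replaying the raid_state_function_test stage that starts here, the RPC sequence the script drives over /var/tmp/spdk-raid.sock can be reproduced by hand roughly as below. This is a minimal sketch rather than part of the captured run: it assumes you are at the top of an SPDK source tree with bdev_svc built, and it collapses the test's intermediate delete/re-create steps into the shortest path from "configuring" to "online".

  # Start the stub bdev app the test talks to, with raid debug logging on,
  # and give it a dedicated RPC socket (-i 0 sets the app instance id).
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten

  # Creating the raid1 volume before its base bdevs exist is allowed; the raid
  # simply sits in the "configuring" state, as the DEBUG output below reports.
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
      -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # As each 32 MiB / 512 B-block malloc bdev appears it is claimed by the raid;
  # once both are discovered the raid moves from "configuring" to "online".
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2

  # The verify_raid_bdev_state checks in the log are this RPC filtered through jq.
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'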
00:17:19.562 [2024-07-25 14:00:08.410552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.562 [2024-07-25 14:00:08.568318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.819 [2024-07-25 14:00:08.789346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.077 [2024-07-25 14:00:08.992465] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.354 14:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.354 14:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:20.354 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:20.610 [2024-07-25 14:00:09.651514] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.610 [2024-07-25 14:00:09.651904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.610 [2024-07-25 14:00:09.652035] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.610 [2024-07-25 14:00:09.652108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.867 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.124 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.124 "name": "Existed_Raid", 00:17:21.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.125 "strip_size_kb": 0, 00:17:21.125 "state": "configuring", 00:17:21.125 "raid_level": "raid1", 00:17:21.125 "superblock": false, 00:17:21.125 "num_base_bdevs": 2, 00:17:21.125 "num_base_bdevs_discovered": 0, 00:17:21.125 "num_base_bdevs_operational": 2, 00:17:21.125 "base_bdevs_list": [ 
00:17:21.125 { 00:17:21.125 "name": "BaseBdev1", 00:17:21.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.125 "is_configured": false, 00:17:21.125 "data_offset": 0, 00:17:21.125 "data_size": 0 00:17:21.125 }, 00:17:21.125 { 00:17:21.125 "name": "BaseBdev2", 00:17:21.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.125 "is_configured": false, 00:17:21.125 "data_offset": 0, 00:17:21.125 "data_size": 0 00:17:21.125 } 00:17:21.125 ] 00:17:21.125 }' 00:17:21.125 14:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.125 14:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.689 14:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:21.947 [2024-07-25 14:00:10.907644] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.947 [2024-07-25 14:00:10.907968] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:17:21.947 14:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:22.204 [2024-07-25 14:00:11.143724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.204 [2024-07-25 14:00:11.143818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.204 [2024-07-25 14:00:11.143832] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.204 [2024-07-25 14:00:11.143861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.204 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:22.461 [2024-07-25 14:00:11.423651] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.461 BaseBdev1 00:17:22.461 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:22.461 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:22.461 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:22.461 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:22.461 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:22.461 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:22.462 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:22.746 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:23.015 [ 00:17:23.015 { 00:17:23.015 "name": "BaseBdev1", 00:17:23.015 "aliases": [ 00:17:23.015 "668f70cf-0fa0-4e08-aa1e-2fee644537d2" 00:17:23.015 ], 00:17:23.015 "product_name": "Malloc disk", 00:17:23.015 "block_size": 512, 00:17:23.016 "num_blocks": 
65536, 00:17:23.016 "uuid": "668f70cf-0fa0-4e08-aa1e-2fee644537d2", 00:17:23.016 "assigned_rate_limits": { 00:17:23.016 "rw_ios_per_sec": 0, 00:17:23.016 "rw_mbytes_per_sec": 0, 00:17:23.016 "r_mbytes_per_sec": 0, 00:17:23.016 "w_mbytes_per_sec": 0 00:17:23.016 }, 00:17:23.016 "claimed": true, 00:17:23.016 "claim_type": "exclusive_write", 00:17:23.016 "zoned": false, 00:17:23.016 "supported_io_types": { 00:17:23.016 "read": true, 00:17:23.016 "write": true, 00:17:23.016 "unmap": true, 00:17:23.016 "flush": true, 00:17:23.016 "reset": true, 00:17:23.016 "nvme_admin": false, 00:17:23.016 "nvme_io": false, 00:17:23.016 "nvme_io_md": false, 00:17:23.016 "write_zeroes": true, 00:17:23.016 "zcopy": true, 00:17:23.016 "get_zone_info": false, 00:17:23.016 "zone_management": false, 00:17:23.016 "zone_append": false, 00:17:23.016 "compare": false, 00:17:23.016 "compare_and_write": false, 00:17:23.016 "abort": true, 00:17:23.016 "seek_hole": false, 00:17:23.016 "seek_data": false, 00:17:23.016 "copy": true, 00:17:23.016 "nvme_iov_md": false 00:17:23.016 }, 00:17:23.016 "memory_domains": [ 00:17:23.016 { 00:17:23.016 "dma_device_id": "system", 00:17:23.016 "dma_device_type": 1 00:17:23.016 }, 00:17:23.016 { 00:17:23.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.016 "dma_device_type": 2 00:17:23.016 } 00:17:23.016 ], 00:17:23.016 "driver_specific": {} 00:17:23.016 } 00:17:23.016 ] 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.016 14:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.274 14:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.274 "name": "Existed_Raid", 00:17:23.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.274 "strip_size_kb": 0, 00:17:23.274 "state": "configuring", 00:17:23.274 "raid_level": "raid1", 00:17:23.274 "superblock": false, 00:17:23.274 "num_base_bdevs": 2, 00:17:23.274 "num_base_bdevs_discovered": 1, 00:17:23.274 "num_base_bdevs_operational": 2, 00:17:23.274 "base_bdevs_list": [ 00:17:23.274 { 00:17:23.274 "name": "BaseBdev1", 00:17:23.274 "uuid": 
"668f70cf-0fa0-4e08-aa1e-2fee644537d2", 00:17:23.274 "is_configured": true, 00:17:23.274 "data_offset": 0, 00:17:23.274 "data_size": 65536 00:17:23.274 }, 00:17:23.274 { 00:17:23.274 "name": "BaseBdev2", 00:17:23.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.274 "is_configured": false, 00:17:23.274 "data_offset": 0, 00:17:23.274 "data_size": 0 00:17:23.274 } 00:17:23.274 ] 00:17:23.274 }' 00:17:23.274 14:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.274 14:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.880 14:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.137 [2024-07-25 14:00:13.040098] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.137 [2024-07-25 14:00:13.040182] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:17:24.137 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:24.397 [2024-07-25 14:00:13.328186] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.397 [2024-07-25 14:00:13.330420] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.397 [2024-07-25 14:00:13.330485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.397 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.653 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.653 "name": "Existed_Raid", 00:17:24.653 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:24.653 "strip_size_kb": 0, 00:17:24.653 "state": "configuring", 00:17:24.653 "raid_level": "raid1", 00:17:24.653 "superblock": false, 00:17:24.653 "num_base_bdevs": 2, 00:17:24.653 "num_base_bdevs_discovered": 1, 00:17:24.653 "num_base_bdevs_operational": 2, 00:17:24.653 "base_bdevs_list": [ 00:17:24.653 { 00:17:24.653 "name": "BaseBdev1", 00:17:24.653 "uuid": "668f70cf-0fa0-4e08-aa1e-2fee644537d2", 00:17:24.653 "is_configured": true, 00:17:24.653 "data_offset": 0, 00:17:24.653 "data_size": 65536 00:17:24.653 }, 00:17:24.653 { 00:17:24.653 "name": "BaseBdev2", 00:17:24.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.653 "is_configured": false, 00:17:24.653 "data_offset": 0, 00:17:24.653 "data_size": 0 00:17:24.653 } 00:17:24.653 ] 00:17:24.653 }' 00:17:24.653 14:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.653 14:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.217 14:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.783 [2024-07-25 14:00:14.539565] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.783 [2024-07-25 14:00:14.539646] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:25.783 [2024-07-25 14:00:14.539659] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:25.783 [2024-07-25 14:00:14.539802] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:25.783 [2024-07-25 14:00:14.540195] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:25.783 [2024-07-25 14:00:14.540221] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:17:25.783 [2024-07-25 14:00:14.540512] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.783 BaseBdev2 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.783 14:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:26.041 [ 00:17:26.041 { 00:17:26.041 "name": "BaseBdev2", 00:17:26.041 "aliases": [ 00:17:26.041 "e168fdd3-202a-4cdd-99a6-1f15e7abf5fd" 00:17:26.041 ], 00:17:26.041 "product_name": "Malloc disk", 00:17:26.041 "block_size": 512, 00:17:26.041 "num_blocks": 65536, 00:17:26.041 "uuid": "e168fdd3-202a-4cdd-99a6-1f15e7abf5fd", 00:17:26.041 
"assigned_rate_limits": { 00:17:26.041 "rw_ios_per_sec": 0, 00:17:26.041 "rw_mbytes_per_sec": 0, 00:17:26.041 "r_mbytes_per_sec": 0, 00:17:26.041 "w_mbytes_per_sec": 0 00:17:26.041 }, 00:17:26.041 "claimed": true, 00:17:26.041 "claim_type": "exclusive_write", 00:17:26.041 "zoned": false, 00:17:26.041 "supported_io_types": { 00:17:26.041 "read": true, 00:17:26.041 "write": true, 00:17:26.041 "unmap": true, 00:17:26.041 "flush": true, 00:17:26.041 "reset": true, 00:17:26.041 "nvme_admin": false, 00:17:26.041 "nvme_io": false, 00:17:26.041 "nvme_io_md": false, 00:17:26.041 "write_zeroes": true, 00:17:26.041 "zcopy": true, 00:17:26.041 "get_zone_info": false, 00:17:26.041 "zone_management": false, 00:17:26.041 "zone_append": false, 00:17:26.041 "compare": false, 00:17:26.041 "compare_and_write": false, 00:17:26.041 "abort": true, 00:17:26.041 "seek_hole": false, 00:17:26.041 "seek_data": false, 00:17:26.041 "copy": true, 00:17:26.041 "nvme_iov_md": false 00:17:26.041 }, 00:17:26.041 "memory_domains": [ 00:17:26.041 { 00:17:26.041 "dma_device_id": "system", 00:17:26.041 "dma_device_type": 1 00:17:26.041 }, 00:17:26.041 { 00:17:26.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.041 "dma_device_type": 2 00:17:26.041 } 00:17:26.041 ], 00:17:26.041 "driver_specific": {} 00:17:26.041 } 00:17:26.041 ] 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.041 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.607 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.607 "name": "Existed_Raid", 00:17:26.607 "uuid": "f820bafd-b77b-4ce5-a0b9-e9362362b374", 00:17:26.607 "strip_size_kb": 0, 00:17:26.607 "state": "online", 00:17:26.607 "raid_level": "raid1", 00:17:26.607 "superblock": false, 00:17:26.607 "num_base_bdevs": 2, 00:17:26.607 "num_base_bdevs_discovered": 2, 00:17:26.607 "num_base_bdevs_operational": 
2, 00:17:26.607 "base_bdevs_list": [ 00:17:26.607 { 00:17:26.607 "name": "BaseBdev1", 00:17:26.607 "uuid": "668f70cf-0fa0-4e08-aa1e-2fee644537d2", 00:17:26.607 "is_configured": true, 00:17:26.607 "data_offset": 0, 00:17:26.607 "data_size": 65536 00:17:26.607 }, 00:17:26.607 { 00:17:26.607 "name": "BaseBdev2", 00:17:26.607 "uuid": "e168fdd3-202a-4cdd-99a6-1f15e7abf5fd", 00:17:26.607 "is_configured": true, 00:17:26.607 "data_offset": 0, 00:17:26.607 "data_size": 65536 00:17:26.607 } 00:17:26.607 ] 00:17:26.607 }' 00:17:26.607 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.607 14:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.171 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:27.171 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:27.171 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:27.171 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:27.171 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:27.171 14:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:27.171 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:27.171 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:27.434 [2024-07-25 14:00:16.284358] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.434 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:27.434 "name": "Existed_Raid", 00:17:27.434 "aliases": [ 00:17:27.434 "f820bafd-b77b-4ce5-a0b9-e9362362b374" 00:17:27.434 ], 00:17:27.434 "product_name": "Raid Volume", 00:17:27.434 "block_size": 512, 00:17:27.434 "num_blocks": 65536, 00:17:27.434 "uuid": "f820bafd-b77b-4ce5-a0b9-e9362362b374", 00:17:27.434 "assigned_rate_limits": { 00:17:27.434 "rw_ios_per_sec": 0, 00:17:27.434 "rw_mbytes_per_sec": 0, 00:17:27.434 "r_mbytes_per_sec": 0, 00:17:27.434 "w_mbytes_per_sec": 0 00:17:27.434 }, 00:17:27.434 "claimed": false, 00:17:27.434 "zoned": false, 00:17:27.434 "supported_io_types": { 00:17:27.434 "read": true, 00:17:27.434 "write": true, 00:17:27.434 "unmap": false, 00:17:27.434 "flush": false, 00:17:27.434 "reset": true, 00:17:27.434 "nvme_admin": false, 00:17:27.434 "nvme_io": false, 00:17:27.434 "nvme_io_md": false, 00:17:27.434 "write_zeroes": true, 00:17:27.434 "zcopy": false, 00:17:27.434 "get_zone_info": false, 00:17:27.434 "zone_management": false, 00:17:27.434 "zone_append": false, 00:17:27.434 "compare": false, 00:17:27.434 "compare_and_write": false, 00:17:27.434 "abort": false, 00:17:27.434 "seek_hole": false, 00:17:27.434 "seek_data": false, 00:17:27.434 "copy": false, 00:17:27.434 "nvme_iov_md": false 00:17:27.434 }, 00:17:27.434 "memory_domains": [ 00:17:27.434 { 00:17:27.434 "dma_device_id": "system", 00:17:27.434 "dma_device_type": 1 00:17:27.434 }, 00:17:27.434 { 00:17:27.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.434 "dma_device_type": 2 00:17:27.434 }, 00:17:27.434 { 00:17:27.434 "dma_device_id": "system", 00:17:27.434 "dma_device_type": 1 00:17:27.434 }, 00:17:27.434 { 00:17:27.434 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.434 "dma_device_type": 2 00:17:27.434 } 00:17:27.434 ], 00:17:27.434 "driver_specific": { 00:17:27.434 "raid": { 00:17:27.434 "uuid": "f820bafd-b77b-4ce5-a0b9-e9362362b374", 00:17:27.434 "strip_size_kb": 0, 00:17:27.434 "state": "online", 00:17:27.434 "raid_level": "raid1", 00:17:27.434 "superblock": false, 00:17:27.434 "num_base_bdevs": 2, 00:17:27.434 "num_base_bdevs_discovered": 2, 00:17:27.434 "num_base_bdevs_operational": 2, 00:17:27.434 "base_bdevs_list": [ 00:17:27.434 { 00:17:27.434 "name": "BaseBdev1", 00:17:27.434 "uuid": "668f70cf-0fa0-4e08-aa1e-2fee644537d2", 00:17:27.434 "is_configured": true, 00:17:27.434 "data_offset": 0, 00:17:27.434 "data_size": 65536 00:17:27.434 }, 00:17:27.434 { 00:17:27.434 "name": "BaseBdev2", 00:17:27.434 "uuid": "e168fdd3-202a-4cdd-99a6-1f15e7abf5fd", 00:17:27.434 "is_configured": true, 00:17:27.434 "data_offset": 0, 00:17:27.434 "data_size": 65536 00:17:27.434 } 00:17:27.434 ] 00:17:27.434 } 00:17:27.434 } 00:17:27.434 }' 00:17:27.434 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.434 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:27.434 BaseBdev2' 00:17:27.434 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.434 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:27.434 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.692 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.692 "name": "BaseBdev1", 00:17:27.692 "aliases": [ 00:17:27.692 "668f70cf-0fa0-4e08-aa1e-2fee644537d2" 00:17:27.692 ], 00:17:27.692 "product_name": "Malloc disk", 00:17:27.692 "block_size": 512, 00:17:27.692 "num_blocks": 65536, 00:17:27.692 "uuid": "668f70cf-0fa0-4e08-aa1e-2fee644537d2", 00:17:27.692 "assigned_rate_limits": { 00:17:27.692 "rw_ios_per_sec": 0, 00:17:27.692 "rw_mbytes_per_sec": 0, 00:17:27.692 "r_mbytes_per_sec": 0, 00:17:27.692 "w_mbytes_per_sec": 0 00:17:27.692 }, 00:17:27.692 "claimed": true, 00:17:27.692 "claim_type": "exclusive_write", 00:17:27.692 "zoned": false, 00:17:27.692 "supported_io_types": { 00:17:27.692 "read": true, 00:17:27.692 "write": true, 00:17:27.692 "unmap": true, 00:17:27.692 "flush": true, 00:17:27.692 "reset": true, 00:17:27.692 "nvme_admin": false, 00:17:27.692 "nvme_io": false, 00:17:27.692 "nvme_io_md": false, 00:17:27.692 "write_zeroes": true, 00:17:27.692 "zcopy": true, 00:17:27.692 "get_zone_info": false, 00:17:27.692 "zone_management": false, 00:17:27.692 "zone_append": false, 00:17:27.692 "compare": false, 00:17:27.692 "compare_and_write": false, 00:17:27.692 "abort": true, 00:17:27.692 "seek_hole": false, 00:17:27.692 "seek_data": false, 00:17:27.692 "copy": true, 00:17:27.692 "nvme_iov_md": false 00:17:27.692 }, 00:17:27.692 "memory_domains": [ 00:17:27.692 { 00:17:27.692 "dma_device_id": "system", 00:17:27.692 "dma_device_type": 1 00:17:27.692 }, 00:17:27.692 { 00:17:27.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.692 "dma_device_type": 2 00:17:27.692 } 00:17:27.692 ], 00:17:27.692 "driver_specific": {} 00:17:27.692 }' 00:17:27.692 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:17:27.692 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.692 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.692 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.950 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.950 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.950 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.950 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.950 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.950 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.950 14:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.270 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.270 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.270 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.270 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:28.270 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.270 "name": "BaseBdev2", 00:17:28.270 "aliases": [ 00:17:28.270 "e168fdd3-202a-4cdd-99a6-1f15e7abf5fd" 00:17:28.270 ], 00:17:28.270 "product_name": "Malloc disk", 00:17:28.270 "block_size": 512, 00:17:28.270 "num_blocks": 65536, 00:17:28.270 "uuid": "e168fdd3-202a-4cdd-99a6-1f15e7abf5fd", 00:17:28.270 "assigned_rate_limits": { 00:17:28.270 "rw_ios_per_sec": 0, 00:17:28.270 "rw_mbytes_per_sec": 0, 00:17:28.270 "r_mbytes_per_sec": 0, 00:17:28.270 "w_mbytes_per_sec": 0 00:17:28.270 }, 00:17:28.270 "claimed": true, 00:17:28.270 "claim_type": "exclusive_write", 00:17:28.270 "zoned": false, 00:17:28.270 "supported_io_types": { 00:17:28.270 "read": true, 00:17:28.270 "write": true, 00:17:28.270 "unmap": true, 00:17:28.270 "flush": true, 00:17:28.270 "reset": true, 00:17:28.270 "nvme_admin": false, 00:17:28.270 "nvme_io": false, 00:17:28.270 "nvme_io_md": false, 00:17:28.270 "write_zeroes": true, 00:17:28.270 "zcopy": true, 00:17:28.270 "get_zone_info": false, 00:17:28.270 "zone_management": false, 00:17:28.270 "zone_append": false, 00:17:28.270 "compare": false, 00:17:28.270 "compare_and_write": false, 00:17:28.270 "abort": true, 00:17:28.270 "seek_hole": false, 00:17:28.270 "seek_data": false, 00:17:28.270 "copy": true, 00:17:28.270 "nvme_iov_md": false 00:17:28.270 }, 00:17:28.270 "memory_domains": [ 00:17:28.270 { 00:17:28.270 "dma_device_id": "system", 00:17:28.270 "dma_device_type": 1 00:17:28.270 }, 00:17:28.270 { 00:17:28.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.270 "dma_device_type": 2 00:17:28.270 } 00:17:28.270 ], 00:17:28.270 "driver_specific": {} 00:17:28.270 }' 00:17:28.270 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.540 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.798 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.798 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.798 14:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:29.056 [2024-07-25 14:00:17.952559] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.056 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.314 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.314 "name": "Existed_Raid", 00:17:29.314 "uuid": "f820bafd-b77b-4ce5-a0b9-e9362362b374", 00:17:29.314 "strip_size_kb": 0, 00:17:29.314 "state": "online", 00:17:29.314 "raid_level": "raid1", 00:17:29.314 "superblock": false, 
00:17:29.314 "num_base_bdevs": 2, 00:17:29.314 "num_base_bdevs_discovered": 1, 00:17:29.314 "num_base_bdevs_operational": 1, 00:17:29.314 "base_bdevs_list": [ 00:17:29.314 { 00:17:29.314 "name": null, 00:17:29.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.314 "is_configured": false, 00:17:29.314 "data_offset": 0, 00:17:29.314 "data_size": 65536 00:17:29.314 }, 00:17:29.314 { 00:17:29.314 "name": "BaseBdev2", 00:17:29.314 "uuid": "e168fdd3-202a-4cdd-99a6-1f15e7abf5fd", 00:17:29.314 "is_configured": true, 00:17:29.314 "data_offset": 0, 00:17:29.314 "data_size": 65536 00:17:29.314 } 00:17:29.314 ] 00:17:29.314 }' 00:17:29.314 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.314 14:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.247 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:30.247 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.247 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.247 14:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:30.247 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:30.247 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.247 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:30.507 [2024-07-25 14:00:19.493747] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.507 [2024-07-25 14:00:19.493899] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.765 [2024-07-25 14:00:19.578709] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.765 [2024-07-25 14:00:19.578792] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.765 [2024-07-25 14:00:19.578805] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:17:30.765 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:30.765 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.765 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:30.765 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 123215 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 123215 ']' 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # 
kill -0 123215 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123215 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.024 killing process with pid 123215 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123215' 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 123215 00:17:31.024 14:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 123215 00:17:31.024 [2024-07-25 14:00:19.875388] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.024 [2024-07-25 14:00:19.875514] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.397 ************************************ 00:17:32.397 END TEST raid_state_function_test 00:17:32.397 ************************************ 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:32.397 00:17:32.397 real 0m12.675s 00:17:32.397 user 0m22.403s 00:17:32.397 sys 0m1.461s 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.397 14:00:21 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:17:32.397 14:00:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:32.397 14:00:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.397 14:00:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.397 ************************************ 00:17:32.397 START TEST raid_state_function_test_sb 00:17:32.397 ************************************ 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:32.397 14:00:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=123606 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123606' 00:17:32.397 Process raid pid: 123606 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 123606 /var/tmp/spdk-raid.sock 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 123606 ']' 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:32.397 14:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:32.398 14:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:32.398 14:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.398 14:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.398 [2024-07-25 14:00:21.145082] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
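The raid_state_function_test_sb stage that starts here repeats the same state-machine checks with on-disk superblocks enabled; in terms of the RPC sequence sketched above, the only difference is the -s flag at creation time (same assumptions as before, illustration only):

  # With -s, raid metadata (the superblock) is written to each base bdev.
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

Because space at the start of every base bdev is then reserved for that superblock, the superblock-enabled raid_write_error_test output earlier in this log reports "data_offset": 2048 and "data_size": 63488 for its base bdevs, whereas the non-superblock raid_state_function_test above reports "data_offset": 0 and "data_size": 65536 for the same 65536-block malloc bdevs.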
00:17:32.398 [2024-07-25 14:00:21.145312] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.398 [2024-07-25 14:00:21.316385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.654 [2024-07-25 14:00:21.535508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.912 [2024-07-25 14:00:21.738988] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.170 14:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.170 14:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:33.170 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:33.428 [2024-07-25 14:00:22.407072] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.428 [2024-07-25 14:00:22.407217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.428 [2024-07-25 14:00:22.407234] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.428 [2024-07-25 14:00:22.407267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.428 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.994 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.994 "name": "Existed_Raid", 00:17:33.994 "uuid": "07a9c68a-eeac-48cc-9d17-f31c3f8e4c06", 00:17:33.994 "strip_size_kb": 0, 00:17:33.994 "state": "configuring", 00:17:33.994 "raid_level": "raid1", 00:17:33.994 "superblock": true, 00:17:33.994 "num_base_bdevs": 2, 00:17:33.994 "num_base_bdevs_discovered": 0, 00:17:33.994 
"num_base_bdevs_operational": 2, 00:17:33.994 "base_bdevs_list": [ 00:17:33.994 { 00:17:33.994 "name": "BaseBdev1", 00:17:33.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.994 "is_configured": false, 00:17:33.994 "data_offset": 0, 00:17:33.994 "data_size": 0 00:17:33.994 }, 00:17:33.994 { 00:17:33.994 "name": "BaseBdev2", 00:17:33.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.994 "is_configured": false, 00:17:33.994 "data_offset": 0, 00:17:33.994 "data_size": 0 00:17:33.994 } 00:17:33.994 ] 00:17:33.994 }' 00:17:33.994 14:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.994 14:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 14:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:34.822 [2024-07-25 14:00:23.671158] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.822 [2024-07-25 14:00:23.671216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:17:34.822 14:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:35.080 [2024-07-25 14:00:23.919239] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.080 [2024-07-25 14:00:23.919334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.080 [2024-07-25 14:00:23.919351] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.080 [2024-07-25 14:00:23.919380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.080 14:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.339 [2024-07-25 14:00:24.219438] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.339 BaseBdev1 00:17:35.339 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:35.339 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:35.339 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:35.339 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:35.339 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:35.339 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:35.339 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.597 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.857 [ 00:17:35.857 { 00:17:35.857 "name": "BaseBdev1", 00:17:35.857 "aliases": [ 00:17:35.857 "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb" 
00:17:35.857 ], 00:17:35.857 "product_name": "Malloc disk", 00:17:35.857 "block_size": 512, 00:17:35.857 "num_blocks": 65536, 00:17:35.857 "uuid": "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb", 00:17:35.857 "assigned_rate_limits": { 00:17:35.857 "rw_ios_per_sec": 0, 00:17:35.857 "rw_mbytes_per_sec": 0, 00:17:35.857 "r_mbytes_per_sec": 0, 00:17:35.857 "w_mbytes_per_sec": 0 00:17:35.857 }, 00:17:35.857 "claimed": true, 00:17:35.857 "claim_type": "exclusive_write", 00:17:35.857 "zoned": false, 00:17:35.857 "supported_io_types": { 00:17:35.857 "read": true, 00:17:35.857 "write": true, 00:17:35.857 "unmap": true, 00:17:35.857 "flush": true, 00:17:35.857 "reset": true, 00:17:35.857 "nvme_admin": false, 00:17:35.857 "nvme_io": false, 00:17:35.857 "nvme_io_md": false, 00:17:35.857 "write_zeroes": true, 00:17:35.857 "zcopy": true, 00:17:35.857 "get_zone_info": false, 00:17:35.857 "zone_management": false, 00:17:35.857 "zone_append": false, 00:17:35.857 "compare": false, 00:17:35.857 "compare_and_write": false, 00:17:35.857 "abort": true, 00:17:35.857 "seek_hole": false, 00:17:35.857 "seek_data": false, 00:17:35.857 "copy": true, 00:17:35.857 "nvme_iov_md": false 00:17:35.857 }, 00:17:35.857 "memory_domains": [ 00:17:35.857 { 00:17:35.857 "dma_device_id": "system", 00:17:35.857 "dma_device_type": 1 00:17:35.857 }, 00:17:35.857 { 00:17:35.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.857 "dma_device_type": 2 00:17:35.857 } 00:17:35.857 ], 00:17:35.857 "driver_specific": {} 00:17:35.857 } 00:17:35.857 ] 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.857 14:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.115 14:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.115 "name": "Existed_Raid", 00:17:36.115 "uuid": "0af32471-1140-48d1-899b-28ec4ed0d7d4", 00:17:36.115 "strip_size_kb": 0, 00:17:36.115 "state": "configuring", 00:17:36.115 "raid_level": "raid1", 00:17:36.115 "superblock": true, 00:17:36.115 "num_base_bdevs": 2, 00:17:36.115 "num_base_bdevs_discovered": 
1, 00:17:36.115 "num_base_bdevs_operational": 2, 00:17:36.115 "base_bdevs_list": [ 00:17:36.115 { 00:17:36.115 "name": "BaseBdev1", 00:17:36.115 "uuid": "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb", 00:17:36.115 "is_configured": true, 00:17:36.115 "data_offset": 2048, 00:17:36.115 "data_size": 63488 00:17:36.115 }, 00:17:36.115 { 00:17:36.115 "name": "BaseBdev2", 00:17:36.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.116 "is_configured": false, 00:17:36.116 "data_offset": 0, 00:17:36.116 "data_size": 0 00:17:36.116 } 00:17:36.116 ] 00:17:36.116 }' 00:17:36.116 14:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.116 14:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.682 14:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:36.939 [2024-07-25 14:00:25.923916] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.939 [2024-07-25 14:00:25.924005] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:17:36.939 14:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:37.197 [2024-07-25 14:00:26.164004] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.197 [2024-07-25 14:00:26.166287] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.197 [2024-07-25 14:00:26.166361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.197 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:37.762 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.762 "name": "Existed_Raid", 00:17:37.762 "uuid": "74d187e9-4f18-45f1-a547-ac8636fddac8", 00:17:37.762 "strip_size_kb": 0, 00:17:37.762 "state": "configuring", 00:17:37.762 "raid_level": "raid1", 00:17:37.762 "superblock": true, 00:17:37.762 "num_base_bdevs": 2, 00:17:37.762 "num_base_bdevs_discovered": 1, 00:17:37.762 "num_base_bdevs_operational": 2, 00:17:37.762 "base_bdevs_list": [ 00:17:37.762 { 00:17:37.762 "name": "BaseBdev1", 00:17:37.762 "uuid": "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb", 00:17:37.762 "is_configured": true, 00:17:37.762 "data_offset": 2048, 00:17:37.762 "data_size": 63488 00:17:37.762 }, 00:17:37.762 { 00:17:37.762 "name": "BaseBdev2", 00:17:37.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.762 "is_configured": false, 00:17:37.762 "data_offset": 0, 00:17:37.762 "data_size": 0 00:17:37.762 } 00:17:37.762 ] 00:17:37.762 }' 00:17:37.762 14:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.762 14:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.327 14:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:38.585 [2024-07-25 14:00:27.505298] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.585 [2024-07-25 14:00:27.505584] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:38.585 [2024-07-25 14:00:27.505602] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:38.585 [2024-07-25 14:00:27.505735] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:38.585 [2024-07-25 14:00:27.506135] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:38.585 [2024-07-25 14:00:27.506161] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:17:38.585 [2024-07-25 14:00:27.506316] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.585 BaseBdev2 00:17:38.585 14:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:38.585 14:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:38.585 14:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:38.585 14:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:38.585 14:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:38.585 14:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:38.585 14:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:38.843 14:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.410 [ 00:17:39.410 { 00:17:39.410 "name": "BaseBdev2", 00:17:39.410 "aliases": [ 00:17:39.410 
"4afff0ad-c1a7-407f-a13c-ed16aa6d4e7c" 00:17:39.410 ], 00:17:39.410 "product_name": "Malloc disk", 00:17:39.410 "block_size": 512, 00:17:39.410 "num_blocks": 65536, 00:17:39.410 "uuid": "4afff0ad-c1a7-407f-a13c-ed16aa6d4e7c", 00:17:39.410 "assigned_rate_limits": { 00:17:39.410 "rw_ios_per_sec": 0, 00:17:39.410 "rw_mbytes_per_sec": 0, 00:17:39.410 "r_mbytes_per_sec": 0, 00:17:39.410 "w_mbytes_per_sec": 0 00:17:39.410 }, 00:17:39.410 "claimed": true, 00:17:39.410 "claim_type": "exclusive_write", 00:17:39.410 "zoned": false, 00:17:39.410 "supported_io_types": { 00:17:39.410 "read": true, 00:17:39.410 "write": true, 00:17:39.410 "unmap": true, 00:17:39.410 "flush": true, 00:17:39.410 "reset": true, 00:17:39.410 "nvme_admin": false, 00:17:39.410 "nvme_io": false, 00:17:39.410 "nvme_io_md": false, 00:17:39.410 "write_zeroes": true, 00:17:39.410 "zcopy": true, 00:17:39.410 "get_zone_info": false, 00:17:39.410 "zone_management": false, 00:17:39.410 "zone_append": false, 00:17:39.410 "compare": false, 00:17:39.410 "compare_and_write": false, 00:17:39.410 "abort": true, 00:17:39.410 "seek_hole": false, 00:17:39.410 "seek_data": false, 00:17:39.410 "copy": true, 00:17:39.410 "nvme_iov_md": false 00:17:39.410 }, 00:17:39.410 "memory_domains": [ 00:17:39.410 { 00:17:39.410 "dma_device_id": "system", 00:17:39.410 "dma_device_type": 1 00:17:39.410 }, 00:17:39.410 { 00:17:39.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.410 "dma_device_type": 2 00:17:39.410 } 00:17:39.410 ], 00:17:39.410 "driver_specific": {} 00:17:39.410 } 00:17:39.410 ] 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.410 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.668 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.668 "name": "Existed_Raid", 00:17:39.668 "uuid": 
"74d187e9-4f18-45f1-a547-ac8636fddac8", 00:17:39.668 "strip_size_kb": 0, 00:17:39.668 "state": "online", 00:17:39.668 "raid_level": "raid1", 00:17:39.668 "superblock": true, 00:17:39.668 "num_base_bdevs": 2, 00:17:39.668 "num_base_bdevs_discovered": 2, 00:17:39.668 "num_base_bdevs_operational": 2, 00:17:39.668 "base_bdevs_list": [ 00:17:39.668 { 00:17:39.668 "name": "BaseBdev1", 00:17:39.668 "uuid": "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb", 00:17:39.668 "is_configured": true, 00:17:39.668 "data_offset": 2048, 00:17:39.668 "data_size": 63488 00:17:39.668 }, 00:17:39.668 { 00:17:39.668 "name": "BaseBdev2", 00:17:39.668 "uuid": "4afff0ad-c1a7-407f-a13c-ed16aa6d4e7c", 00:17:39.668 "is_configured": true, 00:17:39.668 "data_offset": 2048, 00:17:39.668 "data_size": 63488 00:17:39.668 } 00:17:39.668 ] 00:17:39.668 }' 00:17:39.668 14:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.668 14:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:40.243 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:40.508 [2024-07-25 14:00:29.422165] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.508 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:40.508 "name": "Existed_Raid", 00:17:40.508 "aliases": [ 00:17:40.508 "74d187e9-4f18-45f1-a547-ac8636fddac8" 00:17:40.508 ], 00:17:40.508 "product_name": "Raid Volume", 00:17:40.508 "block_size": 512, 00:17:40.508 "num_blocks": 63488, 00:17:40.509 "uuid": "74d187e9-4f18-45f1-a547-ac8636fddac8", 00:17:40.509 "assigned_rate_limits": { 00:17:40.509 "rw_ios_per_sec": 0, 00:17:40.509 "rw_mbytes_per_sec": 0, 00:17:40.509 "r_mbytes_per_sec": 0, 00:17:40.509 "w_mbytes_per_sec": 0 00:17:40.509 }, 00:17:40.509 "claimed": false, 00:17:40.509 "zoned": false, 00:17:40.509 "supported_io_types": { 00:17:40.509 "read": true, 00:17:40.509 "write": true, 00:17:40.509 "unmap": false, 00:17:40.509 "flush": false, 00:17:40.509 "reset": true, 00:17:40.509 "nvme_admin": false, 00:17:40.509 "nvme_io": false, 00:17:40.509 "nvme_io_md": false, 00:17:40.509 "write_zeroes": true, 00:17:40.509 "zcopy": false, 00:17:40.509 "get_zone_info": false, 00:17:40.509 "zone_management": false, 00:17:40.509 "zone_append": false, 00:17:40.509 "compare": false, 00:17:40.509 "compare_and_write": false, 00:17:40.509 "abort": false, 00:17:40.509 "seek_hole": false, 00:17:40.509 "seek_data": false, 00:17:40.509 "copy": false, 00:17:40.509 "nvme_iov_md": false 00:17:40.509 }, 00:17:40.509 "memory_domains": [ 00:17:40.509 { 00:17:40.509 
"dma_device_id": "system", 00:17:40.509 "dma_device_type": 1 00:17:40.509 }, 00:17:40.509 { 00:17:40.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.509 "dma_device_type": 2 00:17:40.509 }, 00:17:40.509 { 00:17:40.509 "dma_device_id": "system", 00:17:40.509 "dma_device_type": 1 00:17:40.509 }, 00:17:40.509 { 00:17:40.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.509 "dma_device_type": 2 00:17:40.509 } 00:17:40.509 ], 00:17:40.509 "driver_specific": { 00:17:40.509 "raid": { 00:17:40.509 "uuid": "74d187e9-4f18-45f1-a547-ac8636fddac8", 00:17:40.509 "strip_size_kb": 0, 00:17:40.509 "state": "online", 00:17:40.509 "raid_level": "raid1", 00:17:40.509 "superblock": true, 00:17:40.509 "num_base_bdevs": 2, 00:17:40.509 "num_base_bdevs_discovered": 2, 00:17:40.509 "num_base_bdevs_operational": 2, 00:17:40.509 "base_bdevs_list": [ 00:17:40.509 { 00:17:40.509 "name": "BaseBdev1", 00:17:40.509 "uuid": "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb", 00:17:40.509 "is_configured": true, 00:17:40.509 "data_offset": 2048, 00:17:40.509 "data_size": 63488 00:17:40.509 }, 00:17:40.509 { 00:17:40.509 "name": "BaseBdev2", 00:17:40.509 "uuid": "4afff0ad-c1a7-407f-a13c-ed16aa6d4e7c", 00:17:40.509 "is_configured": true, 00:17:40.509 "data_offset": 2048, 00:17:40.509 "data_size": 63488 00:17:40.509 } 00:17:40.509 ] 00:17:40.509 } 00:17:40.509 } 00:17:40.509 }' 00:17:40.509 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.509 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:40.509 BaseBdev2' 00:17:40.509 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.509 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:40.509 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:41.075 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:41.075 "name": "BaseBdev1", 00:17:41.075 "aliases": [ 00:17:41.075 "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb" 00:17:41.075 ], 00:17:41.075 "product_name": "Malloc disk", 00:17:41.075 "block_size": 512, 00:17:41.075 "num_blocks": 65536, 00:17:41.075 "uuid": "d412d61c-f0cc-429a-9fa7-616ca1b9a5fb", 00:17:41.075 "assigned_rate_limits": { 00:17:41.075 "rw_ios_per_sec": 0, 00:17:41.075 "rw_mbytes_per_sec": 0, 00:17:41.075 "r_mbytes_per_sec": 0, 00:17:41.075 "w_mbytes_per_sec": 0 00:17:41.075 }, 00:17:41.075 "claimed": true, 00:17:41.075 "claim_type": "exclusive_write", 00:17:41.075 "zoned": false, 00:17:41.075 "supported_io_types": { 00:17:41.075 "read": true, 00:17:41.075 "write": true, 00:17:41.075 "unmap": true, 00:17:41.075 "flush": true, 00:17:41.075 "reset": true, 00:17:41.075 "nvme_admin": false, 00:17:41.075 "nvme_io": false, 00:17:41.075 "nvme_io_md": false, 00:17:41.075 "write_zeroes": true, 00:17:41.075 "zcopy": true, 00:17:41.075 "get_zone_info": false, 00:17:41.075 "zone_management": false, 00:17:41.076 "zone_append": false, 00:17:41.076 "compare": false, 00:17:41.076 "compare_and_write": false, 00:17:41.076 "abort": true, 00:17:41.076 "seek_hole": false, 00:17:41.076 "seek_data": false, 00:17:41.076 "copy": true, 00:17:41.076 "nvme_iov_md": false 00:17:41.076 }, 00:17:41.076 "memory_domains": [ 00:17:41.076 { 00:17:41.076 
"dma_device_id": "system", 00:17:41.076 "dma_device_type": 1 00:17:41.076 }, 00:17:41.076 { 00:17:41.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.076 "dma_device_type": 2 00:17:41.076 } 00:17:41.076 ], 00:17:41.076 "driver_specific": {} 00:17:41.076 }' 00:17:41.076 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.076 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.076 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.076 14:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.076 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.076 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.076 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.076 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.333 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.333 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.333 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.333 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.333 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:41.333 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:41.333 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:41.601 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:41.601 "name": "BaseBdev2", 00:17:41.601 "aliases": [ 00:17:41.601 "4afff0ad-c1a7-407f-a13c-ed16aa6d4e7c" 00:17:41.601 ], 00:17:41.601 "product_name": "Malloc disk", 00:17:41.601 "block_size": 512, 00:17:41.601 "num_blocks": 65536, 00:17:41.601 "uuid": "4afff0ad-c1a7-407f-a13c-ed16aa6d4e7c", 00:17:41.601 "assigned_rate_limits": { 00:17:41.601 "rw_ios_per_sec": 0, 00:17:41.601 "rw_mbytes_per_sec": 0, 00:17:41.601 "r_mbytes_per_sec": 0, 00:17:41.601 "w_mbytes_per_sec": 0 00:17:41.601 }, 00:17:41.601 "claimed": true, 00:17:41.601 "claim_type": "exclusive_write", 00:17:41.601 "zoned": false, 00:17:41.601 "supported_io_types": { 00:17:41.601 "read": true, 00:17:41.601 "write": true, 00:17:41.601 "unmap": true, 00:17:41.601 "flush": true, 00:17:41.601 "reset": true, 00:17:41.601 "nvme_admin": false, 00:17:41.601 "nvme_io": false, 00:17:41.601 "nvme_io_md": false, 00:17:41.601 "write_zeroes": true, 00:17:41.601 "zcopy": true, 00:17:41.601 "get_zone_info": false, 00:17:41.601 "zone_management": false, 00:17:41.601 "zone_append": false, 00:17:41.601 "compare": false, 00:17:41.601 "compare_and_write": false, 00:17:41.601 "abort": true, 00:17:41.601 "seek_hole": false, 00:17:41.601 "seek_data": false, 00:17:41.601 "copy": true, 00:17:41.601 "nvme_iov_md": false 00:17:41.601 }, 00:17:41.601 "memory_domains": [ 00:17:41.601 { 00:17:41.601 "dma_device_id": "system", 00:17:41.601 "dma_device_type": 1 00:17:41.601 }, 00:17:41.601 { 00:17:41.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:41.601 "dma_device_type": 2 00:17:41.601 } 00:17:41.601 ], 00:17:41.601 "driver_specific": {} 00:17:41.601 }' 00:17:41.601 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.601 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.601 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.601 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.862 14:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:42.119 [2024-07-25 14:00:31.154376] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:42.376 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.633 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.633 "name": "Existed_Raid", 00:17:42.633 "uuid": "74d187e9-4f18-45f1-a547-ac8636fddac8", 00:17:42.633 "strip_size_kb": 0, 00:17:42.633 "state": "online", 00:17:42.633 "raid_level": "raid1", 00:17:42.633 "superblock": true, 00:17:42.633 "num_base_bdevs": 2, 00:17:42.633 "num_base_bdevs_discovered": 1, 00:17:42.633 "num_base_bdevs_operational": 1, 00:17:42.633 "base_bdevs_list": [ 00:17:42.633 { 00:17:42.633 "name": null, 00:17:42.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.633 "is_configured": false, 00:17:42.633 "data_offset": 2048, 00:17:42.633 "data_size": 63488 00:17:42.633 }, 00:17:42.633 { 00:17:42.633 "name": "BaseBdev2", 00:17:42.633 "uuid": "4afff0ad-c1a7-407f-a13c-ed16aa6d4e7c", 00:17:42.633 "is_configured": true, 00:17:42.633 "data_offset": 2048, 00:17:42.633 "data_size": 63488 00:17:42.633 } 00:17:42.633 ] 00:17:42.633 }' 00:17:42.633 14:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.633 14:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.198 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:43.198 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:43.198 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.198 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:43.455 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:43.455 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.455 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:43.711 [2024-07-25 14:00:32.699531] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.711 [2024-07-25 14:00:32.699677] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.968 [2024-07-25 14:00:32.786367] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.968 [2024-07-25 14:00:32.786425] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.968 [2024-07-25 14:00:32.786448] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:17:43.968 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:43.968 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:43.968 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:43.968 14:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 123606 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 123606 ']' 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 123606 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123606 00:17:44.226 killing process with pid 123606 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123606' 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 123606 00:17:44.226 14:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 123606 00:17:44.226 [2024-07-25 14:00:33.067937] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.226 [2024-07-25 14:00:33.068072] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.608 ************************************ 00:17:45.608 END TEST raid_state_function_test_sb 00:17:45.608 ************************************ 00:17:45.608 14:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:45.608 00:17:45.608 real 0m13.147s 00:17:45.608 user 0m23.344s 00:17:45.608 sys 0m1.516s 00:17:45.608 14:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.608 14:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.608 14:00:34 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:17:45.608 14:00:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:45.608 14:00:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.608 14:00:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.608 ************************************ 00:17:45.608 START TEST raid_superblock_test 00:17:45.608 ************************************ 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:17:45.608 14:00:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=124000 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 124000 /var/tmp/spdk-raid.sock 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 124000 ']' 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.608 14:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.608 [2024-07-25 14:00:34.344180] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
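The superblock test being set up here builds each base bdev as a 32 MiB malloc disk wrapped in a passthru bdev with a fixed UUID, then assembles the pair into a RAID-1 volume that carries an on-disk superblock. Condensed into plain RPC calls, with the names, sizes, and UUIDs taken from the trace that follows, a sketch of that flow is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Two 32 MiB, 512-byte-block malloc disks, each wrapped in a passthru bdev with a deterministic UUID
$rpc -s $sock bdev_malloc_create 32 512 -b malloc1
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_malloc_create 32 512 -b malloc2
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# Assemble the passthru bdevs into a RAID-1 array with an on-disk superblock (-s)
$rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
# The new array should show up online with both base bdevs discovered
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'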
00:17:45.608 [2024-07-25 14:00:34.344430] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124000 ] 00:17:45.608 [2024-07-25 14:00:34.512506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.867 [2024-07-25 14:00:34.724187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.125 [2024-07-25 14:00:34.923026] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.383 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:46.641 malloc1 00:17:46.641 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.899 [2024-07-25 14:00:35.810860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.899 [2024-07-25 14:00:35.811023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.899 [2024-07-25 14:00:35.811072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:46.899 [2024-07-25 14:00:35.811098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.899 [2024-07-25 14:00:35.813783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.899 [2024-07-25 14:00:35.813867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.899 pt1 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.899 14:00:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:47.157 malloc2 00:17:47.157 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.416 [2024-07-25 14:00:36.440084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.416 [2024-07-25 14:00:36.440252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.416 [2024-07-25 14:00:36.440298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:47.416 [2024-07-25 14:00:36.440324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.416 [2024-07-25 14:00:36.442993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.416 [2024-07-25 14:00:36.443055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.416 pt2 00:17:47.416 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:47.416 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:47.416 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:47.994 [2024-07-25 14:00:36.748194] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.994 [2024-07-25 14:00:36.750462] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.994 [2024-07-25 14:00:36.750685] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:17:47.994 [2024-07-25 14:00:36.750712] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:47.994 [2024-07-25 14:00:36.750862] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:47.994 [2024-07-25 14:00:36.751295] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:17:47.995 [2024-07-25 14:00:36.751320] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:17:47.995 [2024-07-25 14:00:36.751534] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.995 14:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.263 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.263 "name": "raid_bdev1", 00:17:48.263 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:48.263 "strip_size_kb": 0, 00:17:48.263 "state": "online", 00:17:48.263 "raid_level": "raid1", 00:17:48.263 "superblock": true, 00:17:48.263 "num_base_bdevs": 2, 00:17:48.263 "num_base_bdevs_discovered": 2, 00:17:48.263 "num_base_bdevs_operational": 2, 00:17:48.263 "base_bdevs_list": [ 00:17:48.263 { 00:17:48.263 "name": "pt1", 00:17:48.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.263 "is_configured": true, 00:17:48.263 "data_offset": 2048, 00:17:48.263 "data_size": 63488 00:17:48.263 }, 00:17:48.263 { 00:17:48.263 "name": "pt2", 00:17:48.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.263 "is_configured": true, 00:17:48.263 "data_offset": 2048, 00:17:48.263 "data_size": 63488 00:17:48.263 } 00:17:48.263 ] 00:17:48.263 }' 00:17:48.263 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.263 14:00:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:48.910 14:00:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:49.168 [2024-07-25 14:00:38.072757] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.168 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:49.168 "name": "raid_bdev1", 00:17:49.168 "aliases": [ 00:17:49.168 "87623285-61dc-4ab4-b155-91713816ce85" 00:17:49.168 ], 00:17:49.168 "product_name": "Raid Volume", 00:17:49.168 "block_size": 512, 00:17:49.168 "num_blocks": 63488, 00:17:49.168 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:49.168 "assigned_rate_limits": { 00:17:49.168 "rw_ios_per_sec": 0, 00:17:49.168 "rw_mbytes_per_sec": 0, 00:17:49.168 "r_mbytes_per_sec": 0, 00:17:49.168 "w_mbytes_per_sec": 0 00:17:49.168 }, 
00:17:49.168 "claimed": false, 00:17:49.168 "zoned": false, 00:17:49.168 "supported_io_types": { 00:17:49.168 "read": true, 00:17:49.168 "write": true, 00:17:49.168 "unmap": false, 00:17:49.168 "flush": false, 00:17:49.168 "reset": true, 00:17:49.168 "nvme_admin": false, 00:17:49.168 "nvme_io": false, 00:17:49.168 "nvme_io_md": false, 00:17:49.168 "write_zeroes": true, 00:17:49.168 "zcopy": false, 00:17:49.168 "get_zone_info": false, 00:17:49.168 "zone_management": false, 00:17:49.168 "zone_append": false, 00:17:49.168 "compare": false, 00:17:49.168 "compare_and_write": false, 00:17:49.168 "abort": false, 00:17:49.168 "seek_hole": false, 00:17:49.168 "seek_data": false, 00:17:49.168 "copy": false, 00:17:49.168 "nvme_iov_md": false 00:17:49.168 }, 00:17:49.168 "memory_domains": [ 00:17:49.168 { 00:17:49.168 "dma_device_id": "system", 00:17:49.168 "dma_device_type": 1 00:17:49.168 }, 00:17:49.168 { 00:17:49.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.168 "dma_device_type": 2 00:17:49.168 }, 00:17:49.168 { 00:17:49.168 "dma_device_id": "system", 00:17:49.168 "dma_device_type": 1 00:17:49.168 }, 00:17:49.168 { 00:17:49.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.168 "dma_device_type": 2 00:17:49.168 } 00:17:49.168 ], 00:17:49.168 "driver_specific": { 00:17:49.168 "raid": { 00:17:49.168 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:49.168 "strip_size_kb": 0, 00:17:49.168 "state": "online", 00:17:49.168 "raid_level": "raid1", 00:17:49.168 "superblock": true, 00:17:49.168 "num_base_bdevs": 2, 00:17:49.168 "num_base_bdevs_discovered": 2, 00:17:49.168 "num_base_bdevs_operational": 2, 00:17:49.168 "base_bdevs_list": [ 00:17:49.168 { 00:17:49.168 "name": "pt1", 00:17:49.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.168 "is_configured": true, 00:17:49.168 "data_offset": 2048, 00:17:49.168 "data_size": 63488 00:17:49.168 }, 00:17:49.168 { 00:17:49.168 "name": "pt2", 00:17:49.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.168 "is_configured": true, 00:17:49.168 "data_offset": 2048, 00:17:49.168 "data_size": 63488 00:17:49.168 } 00:17:49.168 ] 00:17:49.168 } 00:17:49.168 } 00:17:49.168 }' 00:17:49.168 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.168 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:49.168 pt2' 00:17:49.168 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.168 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:49.168 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.426 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.426 "name": "pt1", 00:17:49.426 "aliases": [ 00:17:49.426 "00000000-0000-0000-0000-000000000001" 00:17:49.426 ], 00:17:49.426 "product_name": "passthru", 00:17:49.426 "block_size": 512, 00:17:49.426 "num_blocks": 65536, 00:17:49.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.426 "assigned_rate_limits": { 00:17:49.426 "rw_ios_per_sec": 0, 00:17:49.426 "rw_mbytes_per_sec": 0, 00:17:49.426 "r_mbytes_per_sec": 0, 00:17:49.426 "w_mbytes_per_sec": 0 00:17:49.426 }, 00:17:49.426 "claimed": true, 00:17:49.426 "claim_type": "exclusive_write", 00:17:49.426 "zoned": false, 00:17:49.426 
"supported_io_types": { 00:17:49.426 "read": true, 00:17:49.426 "write": true, 00:17:49.426 "unmap": true, 00:17:49.426 "flush": true, 00:17:49.426 "reset": true, 00:17:49.426 "nvme_admin": false, 00:17:49.426 "nvme_io": false, 00:17:49.426 "nvme_io_md": false, 00:17:49.426 "write_zeroes": true, 00:17:49.426 "zcopy": true, 00:17:49.426 "get_zone_info": false, 00:17:49.426 "zone_management": false, 00:17:49.426 "zone_append": false, 00:17:49.426 "compare": false, 00:17:49.426 "compare_and_write": false, 00:17:49.426 "abort": true, 00:17:49.426 "seek_hole": false, 00:17:49.426 "seek_data": false, 00:17:49.426 "copy": true, 00:17:49.426 "nvme_iov_md": false 00:17:49.426 }, 00:17:49.426 "memory_domains": [ 00:17:49.426 { 00:17:49.426 "dma_device_id": "system", 00:17:49.426 "dma_device_type": 1 00:17:49.426 }, 00:17:49.426 { 00:17:49.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.426 "dma_device_type": 2 00:17:49.426 } 00:17:49.426 ], 00:17:49.426 "driver_specific": { 00:17:49.426 "passthru": { 00:17:49.426 "name": "pt1", 00:17:49.426 "base_bdev_name": "malloc1" 00:17:49.426 } 00:17:49.426 } 00:17:49.426 }' 00:17:49.426 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.684 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.947 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.947 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.947 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.947 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.947 14:00:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:50.208 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:50.208 "name": "pt2", 00:17:50.208 "aliases": [ 00:17:50.208 "00000000-0000-0000-0000-000000000002" 00:17:50.208 ], 00:17:50.208 "product_name": "passthru", 00:17:50.208 "block_size": 512, 00:17:50.208 "num_blocks": 65536, 00:17:50.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.208 "assigned_rate_limits": { 00:17:50.208 "rw_ios_per_sec": 0, 00:17:50.208 "rw_mbytes_per_sec": 0, 00:17:50.208 "r_mbytes_per_sec": 0, 00:17:50.208 "w_mbytes_per_sec": 0 00:17:50.208 }, 00:17:50.208 "claimed": true, 00:17:50.208 "claim_type": "exclusive_write", 00:17:50.208 "zoned": false, 00:17:50.208 "supported_io_types": { 00:17:50.208 "read": true, 00:17:50.208 "write": true, 00:17:50.208 "unmap": true, 00:17:50.208 "flush": true, 00:17:50.208 
"reset": true, 00:17:50.208 "nvme_admin": false, 00:17:50.208 "nvme_io": false, 00:17:50.208 "nvme_io_md": false, 00:17:50.208 "write_zeroes": true, 00:17:50.208 "zcopy": true, 00:17:50.208 "get_zone_info": false, 00:17:50.208 "zone_management": false, 00:17:50.208 "zone_append": false, 00:17:50.208 "compare": false, 00:17:50.208 "compare_and_write": false, 00:17:50.208 "abort": true, 00:17:50.208 "seek_hole": false, 00:17:50.208 "seek_data": false, 00:17:50.208 "copy": true, 00:17:50.208 "nvme_iov_md": false 00:17:50.208 }, 00:17:50.208 "memory_domains": [ 00:17:50.208 { 00:17:50.208 "dma_device_id": "system", 00:17:50.208 "dma_device_type": 1 00:17:50.208 }, 00:17:50.208 { 00:17:50.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.208 "dma_device_type": 2 00:17:50.208 } 00:17:50.208 ], 00:17:50.208 "driver_specific": { 00:17:50.208 "passthru": { 00:17:50.208 "name": "pt2", 00:17:50.208 "base_bdev_name": "malloc2" 00:17:50.208 } 00:17:50.208 } 00:17:50.208 }' 00:17:50.208 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.208 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.208 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:50.208 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.208 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:17:50.467 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:50.726 [2024-07-25 14:00:39.701174] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.726 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=87623285-61dc-4ab4-b155-91713816ce85 00:17:50.726 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 87623285-61dc-4ab4-b155-91713816ce85 ']' 00:17:50.726 14:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:50.984 [2024-07-25 14:00:39.992904] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.984 [2024-07-25 14:00:39.992949] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.984 [2024-07-25 14:00:39.993059] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.984 [2024-07-25 14:00:39.993155] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:50.984 [2024-07-25 14:00:39.993171] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:17:50.984 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:17:50.984 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.241 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:17:51.241 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:17:51.241 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.241 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:51.499 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.499 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:51.757 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:51.757 14:00:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:52.121 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:52.379 [2024-07-25 14:00:41.249195] 
bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:52.379 [2024-07-25 14:00:41.251537] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:52.379 [2024-07-25 14:00:41.251649] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:52.379 [2024-07-25 14:00:41.251796] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:52.379 [2024-07-25 14:00:41.251855] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.379 [2024-07-25 14:00:41.251868] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:17:52.379 request: 00:17:52.379 { 00:17:52.379 "name": "raid_bdev1", 00:17:52.379 "raid_level": "raid1", 00:17:52.379 "base_bdevs": [ 00:17:52.379 "malloc1", 00:17:52.379 "malloc2" 00:17:52.379 ], 00:17:52.379 "superblock": false, 00:17:52.379 "method": "bdev_raid_create", 00:17:52.379 "req_id": 1 00:17:52.379 } 00:17:52.379 Got JSON-RPC error response 00:17:52.379 response: 00:17:52.379 { 00:17:52.379 "code": -17, 00:17:52.379 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:52.379 } 00:17:52.379 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:52.379 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.379 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.379 14:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.379 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.379 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:17:52.638 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:17:52.638 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:17:52.638 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.895 [2024-07-25 14:00:41.773272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.895 [2024-07-25 14:00:41.773385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.895 [2024-07-25 14:00:41.773447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:52.895 [2024-07-25 14:00:41.773482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.895 [2024-07-25 14:00:41.776115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.895 [2024-07-25 14:00:41.776196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.895 [2024-07-25 14:00:41.776317] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.895 [2024-07-25 14:00:41.776372] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.895 pt1 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.895 14:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.152 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.152 "name": "raid_bdev1", 00:17:53.152 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:53.152 "strip_size_kb": 0, 00:17:53.153 "state": "configuring", 00:17:53.153 "raid_level": "raid1", 00:17:53.153 "superblock": true, 00:17:53.153 "num_base_bdevs": 2, 00:17:53.153 "num_base_bdevs_discovered": 1, 00:17:53.153 "num_base_bdevs_operational": 2, 00:17:53.153 "base_bdevs_list": [ 00:17:53.153 { 00:17:53.153 "name": "pt1", 00:17:53.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.153 "is_configured": true, 00:17:53.153 "data_offset": 2048, 00:17:53.153 "data_size": 63488 00:17:53.153 }, 00:17:53.153 { 00:17:53.153 "name": null, 00:17:53.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.153 "is_configured": false, 00:17:53.153 "data_offset": 2048, 00:17:53.153 "data_size": 63488 00:17:53.153 } 00:17:53.153 ] 00:17:53.153 }' 00:17:53.153 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.153 14:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.719 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:17:53.719 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:53.719 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:53.719 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.977 [2024-07-25 14:00:42.977574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.977 [2024-07-25 14:00:42.977712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.977 [2024-07-25 14:00:42.977756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:53.978 [2024-07-25 14:00:42.977801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.978 [2024-07-25 14:00:42.978354] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.978 [2024-07-25 14:00:42.978419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.978 [2024-07-25 14:00:42.978544] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.978 [2024-07-25 14:00:42.978573] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.978 [2024-07-25 14:00:42.978711] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:53.978 [2024-07-25 14:00:42.978726] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:53.978 [2024-07-25 14:00:42.978834] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:53.978 [2024-07-25 14:00:42.979192] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:53.978 [2024-07-25 14:00:42.979219] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:17:53.978 [2024-07-25 14:00:42.979380] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.978 pt2 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.978 14:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.540 14:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.540 "name": "raid_bdev1", 00:17:54.540 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:54.540 "strip_size_kb": 0, 00:17:54.540 "state": "online", 00:17:54.540 "raid_level": "raid1", 00:17:54.540 "superblock": true, 00:17:54.540 "num_base_bdevs": 2, 00:17:54.540 "num_base_bdevs_discovered": 2, 00:17:54.540 "num_base_bdevs_operational": 2, 00:17:54.540 "base_bdevs_list": [ 00:17:54.540 { 00:17:54.540 "name": "pt1", 00:17:54.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.540 "is_configured": true, 00:17:54.540 "data_offset": 2048, 00:17:54.540 "data_size": 63488 00:17:54.540 }, 00:17:54.540 { 
00:17:54.540 "name": "pt2", 00:17:54.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.540 "is_configured": true, 00:17:54.540 "data_offset": 2048, 00:17:54.540 "data_size": 63488 00:17:54.540 } 00:17:54.540 ] 00:17:54.540 }' 00:17:54.540 14:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.540 14:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.104 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:55.368 [2024-07-25 14:00:44.258162] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.368 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:55.368 "name": "raid_bdev1", 00:17:55.368 "aliases": [ 00:17:55.368 "87623285-61dc-4ab4-b155-91713816ce85" 00:17:55.368 ], 00:17:55.368 "product_name": "Raid Volume", 00:17:55.368 "block_size": 512, 00:17:55.368 "num_blocks": 63488, 00:17:55.368 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:55.368 "assigned_rate_limits": { 00:17:55.368 "rw_ios_per_sec": 0, 00:17:55.368 "rw_mbytes_per_sec": 0, 00:17:55.368 "r_mbytes_per_sec": 0, 00:17:55.368 "w_mbytes_per_sec": 0 00:17:55.368 }, 00:17:55.368 "claimed": false, 00:17:55.368 "zoned": false, 00:17:55.368 "supported_io_types": { 00:17:55.368 "read": true, 00:17:55.368 "write": true, 00:17:55.368 "unmap": false, 00:17:55.368 "flush": false, 00:17:55.368 "reset": true, 00:17:55.368 "nvme_admin": false, 00:17:55.368 "nvme_io": false, 00:17:55.368 "nvme_io_md": false, 00:17:55.368 "write_zeroes": true, 00:17:55.368 "zcopy": false, 00:17:55.368 "get_zone_info": false, 00:17:55.368 "zone_management": false, 00:17:55.368 "zone_append": false, 00:17:55.368 "compare": false, 00:17:55.368 "compare_and_write": false, 00:17:55.368 "abort": false, 00:17:55.368 "seek_hole": false, 00:17:55.368 "seek_data": false, 00:17:55.368 "copy": false, 00:17:55.368 "nvme_iov_md": false 00:17:55.368 }, 00:17:55.368 "memory_domains": [ 00:17:55.368 { 00:17:55.368 "dma_device_id": "system", 00:17:55.368 "dma_device_type": 1 00:17:55.368 }, 00:17:55.368 { 00:17:55.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.368 "dma_device_type": 2 00:17:55.368 }, 00:17:55.368 { 00:17:55.368 "dma_device_id": "system", 00:17:55.368 "dma_device_type": 1 00:17:55.368 }, 00:17:55.368 { 00:17:55.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.368 "dma_device_type": 2 00:17:55.368 } 00:17:55.368 ], 00:17:55.368 "driver_specific": { 00:17:55.368 "raid": { 00:17:55.368 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:55.368 "strip_size_kb": 0, 00:17:55.368 "state": "online", 00:17:55.368 "raid_level": "raid1", 
00:17:55.368 "superblock": true, 00:17:55.368 "num_base_bdevs": 2, 00:17:55.368 "num_base_bdevs_discovered": 2, 00:17:55.368 "num_base_bdevs_operational": 2, 00:17:55.368 "base_bdevs_list": [ 00:17:55.368 { 00:17:55.368 "name": "pt1", 00:17:55.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.368 "is_configured": true, 00:17:55.368 "data_offset": 2048, 00:17:55.368 "data_size": 63488 00:17:55.368 }, 00:17:55.368 { 00:17:55.368 "name": "pt2", 00:17:55.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.368 "is_configured": true, 00:17:55.368 "data_offset": 2048, 00:17:55.368 "data_size": 63488 00:17:55.368 } 00:17:55.368 ] 00:17:55.368 } 00:17:55.368 } 00:17:55.368 }' 00:17:55.368 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.368 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:55.368 pt2' 00:17:55.368 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:55.368 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:55.368 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:55.635 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:55.635 "name": "pt1", 00:17:55.635 "aliases": [ 00:17:55.635 "00000000-0000-0000-0000-000000000001" 00:17:55.635 ], 00:17:55.635 "product_name": "passthru", 00:17:55.635 "block_size": 512, 00:17:55.635 "num_blocks": 65536, 00:17:55.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.635 "assigned_rate_limits": { 00:17:55.635 "rw_ios_per_sec": 0, 00:17:55.635 "rw_mbytes_per_sec": 0, 00:17:55.635 "r_mbytes_per_sec": 0, 00:17:55.635 "w_mbytes_per_sec": 0 00:17:55.635 }, 00:17:55.635 "claimed": true, 00:17:55.635 "claim_type": "exclusive_write", 00:17:55.635 "zoned": false, 00:17:55.635 "supported_io_types": { 00:17:55.635 "read": true, 00:17:55.635 "write": true, 00:17:55.635 "unmap": true, 00:17:55.635 "flush": true, 00:17:55.635 "reset": true, 00:17:55.635 "nvme_admin": false, 00:17:55.635 "nvme_io": false, 00:17:55.635 "nvme_io_md": false, 00:17:55.635 "write_zeroes": true, 00:17:55.635 "zcopy": true, 00:17:55.635 "get_zone_info": false, 00:17:55.635 "zone_management": false, 00:17:55.635 "zone_append": false, 00:17:55.635 "compare": false, 00:17:55.635 "compare_and_write": false, 00:17:55.635 "abort": true, 00:17:55.635 "seek_hole": false, 00:17:55.635 "seek_data": false, 00:17:55.635 "copy": true, 00:17:55.635 "nvme_iov_md": false 00:17:55.635 }, 00:17:55.635 "memory_domains": [ 00:17:55.635 { 00:17:55.635 "dma_device_id": "system", 00:17:55.635 "dma_device_type": 1 00:17:55.635 }, 00:17:55.635 { 00:17:55.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.635 "dma_device_type": 2 00:17:55.635 } 00:17:55.635 ], 00:17:55.635 "driver_specific": { 00:17:55.635 "passthru": { 00:17:55.635 "name": "pt1", 00:17:55.635 "base_bdev_name": "malloc1" 00:17:55.635 } 00:17:55.635 } 00:17:55.635 }' 00:17:55.635 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.635 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:55.893 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:55.894 14:00:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.894 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.894 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:55.894 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.894 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.894 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.894 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.156 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.156 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.156 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.156 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:56.156 14:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:56.416 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:56.416 "name": "pt2", 00:17:56.416 "aliases": [ 00:17:56.416 "00000000-0000-0000-0000-000000000002" 00:17:56.416 ], 00:17:56.416 "product_name": "passthru", 00:17:56.416 "block_size": 512, 00:17:56.416 "num_blocks": 65536, 00:17:56.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.416 "assigned_rate_limits": { 00:17:56.416 "rw_ios_per_sec": 0, 00:17:56.416 "rw_mbytes_per_sec": 0, 00:17:56.416 "r_mbytes_per_sec": 0, 00:17:56.416 "w_mbytes_per_sec": 0 00:17:56.416 }, 00:17:56.416 "claimed": true, 00:17:56.416 "claim_type": "exclusive_write", 00:17:56.416 "zoned": false, 00:17:56.416 "supported_io_types": { 00:17:56.416 "read": true, 00:17:56.416 "write": true, 00:17:56.416 "unmap": true, 00:17:56.416 "flush": true, 00:17:56.417 "reset": true, 00:17:56.417 "nvme_admin": false, 00:17:56.417 "nvme_io": false, 00:17:56.417 "nvme_io_md": false, 00:17:56.417 "write_zeroes": true, 00:17:56.417 "zcopy": true, 00:17:56.417 "get_zone_info": false, 00:17:56.417 "zone_management": false, 00:17:56.417 "zone_append": false, 00:17:56.417 "compare": false, 00:17:56.417 "compare_and_write": false, 00:17:56.417 "abort": true, 00:17:56.417 "seek_hole": false, 00:17:56.417 "seek_data": false, 00:17:56.417 "copy": true, 00:17:56.417 "nvme_iov_md": false 00:17:56.417 }, 00:17:56.417 "memory_domains": [ 00:17:56.417 { 00:17:56.417 "dma_device_id": "system", 00:17:56.417 "dma_device_type": 1 00:17:56.417 }, 00:17:56.417 { 00:17:56.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.417 "dma_device_type": 2 00:17:56.417 } 00:17:56.417 ], 00:17:56.417 "driver_specific": { 00:17:56.417 "passthru": { 00:17:56.417 "name": "pt2", 00:17:56.417 "base_bdev_name": "malloc2" 00:17:56.417 } 00:17:56.417 } 00:17:56.417 }' 00:17:56.417 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.417 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.417 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:56.417 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.417 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.417 
14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:56.417 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.674 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.674 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.674 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.674 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.674 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.674 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:56.674 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:56.931 [2024-07-25 14:00:45.850542] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.931 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 87623285-61dc-4ab4-b155-91713816ce85 '!=' 87623285-61dc-4ab4-b155-91713816ce85 ']' 00:17:56.931 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:17:56.931 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:56.931 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:56.931 14:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:57.189 [2024-07-25 14:00:46.150381] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.189 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.446 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.446 "name": "raid_bdev1", 00:17:57.446 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:17:57.446 "strip_size_kb": 0, 00:17:57.446 "state": "online", 00:17:57.446 "raid_level": "raid1", 00:17:57.446 
"superblock": true, 00:17:57.446 "num_base_bdevs": 2, 00:17:57.446 "num_base_bdevs_discovered": 1, 00:17:57.446 "num_base_bdevs_operational": 1, 00:17:57.446 "base_bdevs_list": [ 00:17:57.446 { 00:17:57.446 "name": null, 00:17:57.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.446 "is_configured": false, 00:17:57.446 "data_offset": 2048, 00:17:57.446 "data_size": 63488 00:17:57.446 }, 00:17:57.446 { 00:17:57.446 "name": "pt2", 00:17:57.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.446 "is_configured": true, 00:17:57.446 "data_offset": 2048, 00:17:57.446 "data_size": 63488 00:17:57.446 } 00:17:57.446 ] 00:17:57.446 }' 00:17:57.446 14:00:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.446 14:00:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.819 14:00:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:59.076 [2024-07-25 14:00:47.986823] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.076 [2024-07-25 14:00:47.986874] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.076 [2024-07-25 14:00:47.986975] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.077 [2024-07-25 14:00:47.987043] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.077 [2024-07-25 14:00:47.987056] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:17:59.077 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:17:59.077 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.334 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:17:59.334 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:17:59.334 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:17:59.334 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:59.334 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:59.591 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:59.591 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:59.591 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:17:59.591 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:59.591 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:17:59.591 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.849 [2024-07-25 14:00:48.786981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.849 [2024-07-25 14:00:48.787117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.849 [2024-07-25 14:00:48.787168] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:59.849 [2024-07-25 14:00:48.787202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.849 [2024-07-25 14:00:48.789917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.849 [2024-07-25 14:00:48.789998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.849 [2024-07-25 14:00:48.790126] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:59.849 [2024-07-25 14:00:48.790195] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.849 [2024-07-25 14:00:48.790362] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:17:59.849 [2024-07-25 14:00:48.790380] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:59.849 [2024-07-25 14:00:48.790482] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:59.849 [2024-07-25 14:00:48.790836] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:17:59.849 [2024-07-25 14:00:48.790853] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:17:59.849 [2024-07-25 14:00:48.791056] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.849 pt2 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.849 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.107 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.107 "name": "raid_bdev1", 00:18:00.107 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:18:00.107 "strip_size_kb": 0, 00:18:00.107 "state": "online", 00:18:00.107 "raid_level": "raid1", 00:18:00.107 "superblock": true, 00:18:00.107 "num_base_bdevs": 2, 00:18:00.107 "num_base_bdevs_discovered": 1, 00:18:00.107 "num_base_bdevs_operational": 1, 00:18:00.107 "base_bdevs_list": [ 00:18:00.107 { 00:18:00.107 "name": null, 00:18:00.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.107 "is_configured": false, 00:18:00.107 "data_offset": 
2048, 00:18:00.107 "data_size": 63488 00:18:00.107 }, 00:18:00.107 { 00:18:00.107 "name": "pt2", 00:18:00.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.107 "is_configured": true, 00:18:00.107 "data_offset": 2048, 00:18:00.107 "data_size": 63488 00:18:00.107 } 00:18:00.107 ] 00:18:00.107 }' 00:18:00.107 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.107 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.040 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:01.040 [2024-07-25 14:00:50.027295] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.040 [2024-07-25 14:00:50.027347] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.040 [2024-07-25 14:00:50.027443] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.040 [2024-07-25 14:00:50.027506] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.040 [2024-07-25 14:00:50.027519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:18:01.040 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:18:01.040 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.297 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:18:01.297 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:18:01.297 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:18:01.297 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.554 [2024-07-25 14:00:50.523448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.554 [2024-07-25 14:00:50.523571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.554 [2024-07-25 14:00:50.523628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:01.554 [2024-07-25 14:00:50.523656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.554 [2024-07-25 14:00:50.526337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.554 [2024-07-25 14:00:50.526410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.554 [2024-07-25 14:00:50.526536] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:01.554 [2024-07-25 14:00:50.526591] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.554 [2024-07-25 14:00:50.526756] bdev_raid.c:3743:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:01.554 [2024-07-25 14:00:50.526773] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.554 [2024-07-25 14:00:50.526791] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, 
state configuring 00:18:01.554 [2024-07-25 14:00:50.526864] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.554 [2024-07-25 14:00:50.526952] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:18:01.554 [2024-07-25 14:00:50.526966] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:01.554 [2024-07-25 14:00:50.527076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:01.554 [2024-07-25 14:00:50.527431] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:18:01.554 [2024-07-25 14:00:50.527458] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:18:01.554 [2024-07-25 14:00:50.527666] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.554 pt1 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.554 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.812 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.812 "name": "raid_bdev1", 00:18:01.812 "uuid": "87623285-61dc-4ab4-b155-91713816ce85", 00:18:01.812 "strip_size_kb": 0, 00:18:01.812 "state": "online", 00:18:01.812 "raid_level": "raid1", 00:18:01.812 "superblock": true, 00:18:01.812 "num_base_bdevs": 2, 00:18:01.812 "num_base_bdevs_discovered": 1, 00:18:01.812 "num_base_bdevs_operational": 1, 00:18:01.812 "base_bdevs_list": [ 00:18:01.812 { 00:18:01.812 "name": null, 00:18:01.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.812 "is_configured": false, 00:18:01.812 "data_offset": 2048, 00:18:01.812 "data_size": 63488 00:18:01.812 }, 00:18:01.812 { 00:18:01.812 "name": "pt2", 00:18:01.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.812 "is_configured": true, 00:18:01.812 "data_offset": 2048, 00:18:01.812 "data_size": 63488 00:18:01.812 } 00:18:01.812 ] 00:18:01.812 }' 00:18:01.812 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.812 14:00:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.743 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:02.743 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:03.000 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:18:03.000 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:03.000 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:18:03.000 [2024-07-25 14:00:52.042407] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 87623285-61dc-4ab4-b155-91713816ce85 '!=' 87623285-61dc-4ab4-b155-91713816ce85 ']' 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 124000 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 124000 ']' 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 124000 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124000 00:18:03.258 killing process with pid 124000 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124000' 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 124000 00:18:03.258 [2024-07-25 14:00:52.083919] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.258 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 124000 00:18:03.258 [2024-07-25 14:00:52.084041] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.258 [2024-07-25 14:00:52.084116] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.258 [2024-07-25 14:00:52.084129] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:18:03.258 [2024-07-25 14:00:52.252267] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.631 ************************************ 00:18:04.631 END TEST raid_superblock_test 00:18:04.631 ************************************ 00:18:04.631 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:18:04.631 00:18:04.631 real 0m19.113s 00:18:04.631 user 0m35.173s 00:18:04.631 sys 0m2.119s 00:18:04.631 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:04.631 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.631 14:00:53 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test 
raid_io_error_test raid1 2 read 00:18:04.631 14:00:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:04.631 14:00:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:04.631 14:00:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.631 ************************************ 00:18:04.631 START TEST raid_read_error_test 00:18:04.631 ************************************ 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid1 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=2 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid1 '!=' raid1 ']' 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@892 -- # strip_size=0 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.FHLuwuSZW5 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=124566 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 124566 /var/tmp/spdk-raid.sock 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 124566 ']' 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:04.631 
14:00:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:04.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.631 14:00:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.631 [2024-07-25 14:00:53.521907] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:18:04.631 [2024-07-25 14:00:53.522182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124566 ] 00:18:04.888 [2024-07-25 14:00:53.724511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.144 [2024-07-25 14:00:53.966188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.144 [2024-07-25 14:00:54.169519] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.709 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.709 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:05.709 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:18:05.709 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:05.967 BaseBdev1_malloc 00:18:05.967 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:06.225 true 00:18:06.225 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:06.483 [2024-07-25 14:00:55.385048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:06.483 [2024-07-25 14:00:55.385193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.483 [2024-07-25 14:00:55.385254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:06.483 [2024-07-25 14:00:55.385281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.483 [2024-07-25 14:00:55.388133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.483 [2024-07-25 14:00:55.388193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.483 BaseBdev1 00:18:06.483 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:18:06.483 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:06.742 BaseBdev2_malloc 00:18:06.742 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:07.000 true 00:18:07.000 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:07.258 [2024-07-25 14:00:56.203730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:07.258 [2024-07-25 14:00:56.203893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.258 [2024-07-25 14:00:56.203945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:07.258 [2024-07-25 14:00:56.203972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.258 [2024-07-25 14:00:56.206636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.258 [2024-07-25 14:00:56.206696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.258 BaseBdev2 00:18:07.258 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:07.516 [2024-07-25 14:00:56.511895] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.516 [2024-07-25 14:00:56.514209] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.516 [2024-07-25 14:00:56.514516] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:18:07.516 [2024-07-25 14:00:56.514546] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:07.516 [2024-07-25 14:00:56.514689] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:07.516 [2024-07-25 14:00:56.515164] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:18:07.516 [2024-07-25 14:00:56.515191] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:18:07.516 [2024-07-25 14:00:56.515407] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.516 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.774 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.774 "name": "raid_bdev1", 00:18:07.774 "uuid": "b8cad10f-c92b-46d8-918e-e77cb4b17b64", 00:18:07.774 "strip_size_kb": 0, 00:18:07.774 "state": "online", 00:18:07.774 "raid_level": "raid1", 00:18:07.774 "superblock": true, 00:18:07.774 "num_base_bdevs": 2, 00:18:07.774 "num_base_bdevs_discovered": 2, 00:18:07.774 "num_base_bdevs_operational": 2, 00:18:07.774 "base_bdevs_list": [ 00:18:07.774 { 00:18:07.774 "name": "BaseBdev1", 00:18:07.774 "uuid": "2559f40e-8e63-50f4-b3f1-c37c46bf59f5", 00:18:07.774 "is_configured": true, 00:18:07.774 "data_offset": 2048, 00:18:07.774 "data_size": 63488 00:18:07.774 }, 00:18:07.774 { 00:18:07.774 "name": "BaseBdev2", 00:18:07.774 "uuid": "0f59ca6a-0f6d-5c4e-9eb2-a404c1a5c438", 00:18:07.774 "is_configured": true, 00:18:07.774 "data_offset": 2048, 00:18:07.774 "data_size": 63488 00:18:07.774 } 00:18:07.774 ] 00:18:07.774 }' 00:18:07.774 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.774 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.709 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:18:08.709 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:08.709 [2024-07-25 14:00:57.525393] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:09.642 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ read = \w\r\i\t\e ]] 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=2 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
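The raid_bdev_info JSON dumped above is what verify_raid_bdev_state compares field by field (state, raid_level, strip_size_kb and the base bdev counts). A minimal standalone version of that check, assuming the same /var/tmp/spdk-raid.sock target and the jq filter used in the trace; the variable names are illustrative, not taken from the script:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # pull only the raid_bdev1 entry, exactly as the test's jq filter does
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  state=$(echo "$info" | jq -r '.state')
  discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
  # before any error is injected the raid1 should be online with both members present
  [[ "$state" == online && "$discovered" -eq 2 ]] || echo "unexpected raid state: $state ($discovered base bdevs)"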
00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.900 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.158 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.158 "name": "raid_bdev1", 00:18:10.158 "uuid": "b8cad10f-c92b-46d8-918e-e77cb4b17b64", 00:18:10.158 "strip_size_kb": 0, 00:18:10.158 "state": "online", 00:18:10.158 "raid_level": "raid1", 00:18:10.158 "superblock": true, 00:18:10.158 "num_base_bdevs": 2, 00:18:10.158 "num_base_bdevs_discovered": 2, 00:18:10.158 "num_base_bdevs_operational": 2, 00:18:10.158 "base_bdevs_list": [ 00:18:10.158 { 00:18:10.158 "name": "BaseBdev1", 00:18:10.158 "uuid": "2559f40e-8e63-50f4-b3f1-c37c46bf59f5", 00:18:10.158 "is_configured": true, 00:18:10.158 "data_offset": 2048, 00:18:10.158 "data_size": 63488 00:18:10.158 }, 00:18:10.158 { 00:18:10.158 "name": "BaseBdev2", 00:18:10.158 "uuid": "0f59ca6a-0f6d-5c4e-9eb2-a404c1a5c438", 00:18:10.158 "is_configured": true, 00:18:10.158 "data_offset": 2048, 00:18:10.158 "data_size": 63488 00:18:10.158 } 00:18:10.158 ] 00:18:10.158 }' 00:18:10.158 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.158 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.724 14:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:10.993 [2024-07-25 14:00:59.858204] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.993 [2024-07-25 14:00:59.858261] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.993 [2024-07-25 14:00:59.861438] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.993 [2024-07-25 14:00:59.861553] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.993 [2024-07-25 14:00:59.861698] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.993 [2024-07-25 14:00:59.861731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:18:10.993 0 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 124566 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 124566 ']' 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 124566 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124566 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124566' 00:18:10.993 killing process with pid 124566 00:18:10.993 14:00:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 124566 00:18:10.993 14:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 124566 00:18:10.993 [2024-07-25 14:00:59.899068] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.993 [2024-07-25 14:01:00.011643] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.FHLuwuSZW5 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:18:12.375 ************************************ 00:18:12.375 END TEST raid_read_error_test 00:18:12.375 ************************************ 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.00 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid1 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@935 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:12.375 00:18:12.375 real 0m7.786s 00:18:12.375 user 0m11.856s 00:18:12.375 sys 0m0.924s 00:18:12.375 14:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.376 14:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.376 14:01:01 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:18:12.376 14:01:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:12.376 14:01:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.376 14:01:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.376 ************************************ 00:18:12.376 START TEST raid_write_error_test 00:18:12.376 ************************************ 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid1 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=2 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 
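The write test starting here rebuilds the same three-layer stack per base device that the read test above used: a malloc bdev wrapped by an error bdev (registered as EE_<name> by bdev_error_create), exposed through a passthru bdev that becomes the raid member. A condensed sketch of that RPC sequence, assuming the target is already listening on /var/tmp/spdk-raid.sock:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc              # 32 MiB backing store, 512-byte blocks
  $RPC bdev_error_create BaseBdev1_malloc                         # injectable layer, registered as EE_BaseBdev1_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1   # the name the raid sees
  # BaseBdev2 is built the same way, then the two passthru bdevs form the raid1
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # -s writes a superblock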
00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid1 '!=' raid1 ']' 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@892 -- # strip_size=0 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.CLeENxPncH 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=124762 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 124762 /var/tmp/spdk-raid.sock 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 124762 ']' 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.376 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.376 [2024-07-25 14:01:01.343090] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
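As with the read test, the bdevperf instance launched above runs -w randrw with -z, so the workload is only triggered later over RPC, and the failure rate is read back out of its log once the run completes. A sketch of that outer loop; the output redirection into the temp log is an assumption here, the trace only shows the mktemp path and the grep/awk parse:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  LOG=/raidtest/tmp.CLeENxPncH    # the mktemp -p /raidtest result shown above
  $BDEVPERF -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$LOG" &
  # ... create the base bdevs and the raid, then kick off the I/O:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
  # column 6 of the raid_bdev1 result line is the fail-per-second figure asserted at the end
  fail_per_s=$(grep -v Job "$LOG" | grep raid_bdev1 | awk '{print $6}')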
00:18:12.376 [2024-07-25 14:01:01.343340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124762 ] 00:18:12.634 [2024-07-25 14:01:01.500979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.893 [2024-07-25 14:01:01.715576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.893 [2024-07-25 14:01:01.916070] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.459 14:01:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.459 14:01:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:13.459 14:01:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:18:13.459 14:01:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:13.724 BaseBdev1_malloc 00:18:13.724 14:01:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:13.984 true 00:18:13.984 14:01:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:14.244 [2024-07-25 14:01:03.162442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:14.244 [2024-07-25 14:01:03.162592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.244 [2024-07-25 14:01:03.162648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:14.244 [2024-07-25 14:01:03.162675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.244 [2024-07-25 14:01:03.165352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.244 [2024-07-25 14:01:03.165412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:14.244 BaseBdev1 00:18:14.244 14:01:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:18:14.244 14:01:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:14.525 BaseBdev2_malloc 00:18:14.525 14:01:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:14.786 true 00:18:15.045 14:01:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:15.045 [2024-07-25 14:01:04.063453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:15.045 [2024-07-25 14:01:04.063622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.045 [2024-07-25 14:01:04.063692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:15.045 [2024-07-25 
14:01:04.063718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.045 [2024-07-25 14:01:04.066468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.045 [2024-07-25 14:01:04.066538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:15.045 BaseBdev2 00:18:15.045 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:18:15.613 [2024-07-25 14:01:04.367574] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.613 [2024-07-25 14:01:04.369906] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.613 [2024-07-25 14:01:04.370191] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:18:15.613 [2024-07-25 14:01:04.370210] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:15.613 [2024-07-25 14:01:04.370363] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:15.613 [2024-07-25 14:01:04.370809] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:18:15.613 [2024-07-25 14:01:04.370836] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:18:15.613 [2024-07-25 14:01:04.371063] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.613 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.872 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.872 "name": "raid_bdev1", 00:18:15.872 "uuid": "e34ff42a-93b3-4386-b350-d8ab192bbbf5", 00:18:15.872 "strip_size_kb": 0, 00:18:15.872 "state": "online", 00:18:15.872 "raid_level": "raid1", 00:18:15.872 "superblock": true, 00:18:15.872 "num_base_bdevs": 2, 00:18:15.872 "num_base_bdevs_discovered": 2, 00:18:15.872 "num_base_bdevs_operational": 2, 00:18:15.872 "base_bdevs_list": [ 00:18:15.872 { 00:18:15.872 "name": 
"BaseBdev1", 00:18:15.872 "uuid": "2f5c44c7-db6f-5c18-a4dc-f81554a25289", 00:18:15.872 "is_configured": true, 00:18:15.872 "data_offset": 2048, 00:18:15.872 "data_size": 63488 00:18:15.872 }, 00:18:15.872 { 00:18:15.872 "name": "BaseBdev2", 00:18:15.872 "uuid": "96becd71-6684-5033-9017-250efb53a621", 00:18:15.872 "is_configured": true, 00:18:15.872 "data_offset": 2048, 00:18:15.872 "data_size": 63488 00:18:15.872 } 00:18:15.872 ] 00:18:15.872 }' 00:18:15.872 14:01:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.872 14:01:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.438 14:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:16.438 14:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:18:16.438 [2024-07-25 14:01:05.417118] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:17.373 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:17.632 [2024-07-25 14:01:06.609270] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:17.632 [2024-07-25 14:01:06.609417] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.632 [2024-07-25 14:01:06.609685] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ write = \w\r\i\t\e ]] 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@921 -- # expected_num_base_bdevs=1 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.632 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.892 
14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.892 "name": "raid_bdev1", 00:18:17.892 "uuid": "e34ff42a-93b3-4386-b350-d8ab192bbbf5", 00:18:17.892 "strip_size_kb": 0, 00:18:17.892 "state": "online", 00:18:17.892 "raid_level": "raid1", 00:18:17.892 "superblock": true, 00:18:17.892 "num_base_bdevs": 2, 00:18:17.892 "num_base_bdevs_discovered": 1, 00:18:17.892 "num_base_bdevs_operational": 1, 00:18:17.892 "base_bdevs_list": [ 00:18:17.892 { 00:18:17.892 "name": null, 00:18:17.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.892 "is_configured": false, 00:18:17.892 "data_offset": 2048, 00:18:17.892 "data_size": 63488 00:18:17.892 }, 00:18:17.892 { 00:18:17.892 "name": "BaseBdev2", 00:18:17.892 "uuid": "96becd71-6684-5033-9017-250efb53a621", 00:18:17.892 "is_configured": true, 00:18:17.892 "data_offset": 2048, 00:18:17.892 "data_size": 63488 00:18:17.892 } 00:18:17.892 ] 00:18:17.892 }' 00:18:17.892 14:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.892 14:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.827 14:01:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:18.827 [2024-07-25 14:01:07.817224] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.828 [2024-07-25 14:01:07.817281] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.828 [2024-07-25 14:01:07.820334] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.828 [2024-07-25 14:01:07.820405] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.828 [2024-07-25 14:01:07.820466] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.828 [2024-07-25 14:01:07.820479] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:18:18.828 0 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 124762 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 124762 ']' 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 124762 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124762 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124762' 00:18:18.828 killing process with pid 124762 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 124762 00:18:18.828 14:01:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 124762 00:18:18.828 [2024-07-25 14:01:07.860607] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.086 [2024-07-25 
14:01:07.975403] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.CLeENxPncH 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:18:20.462 ************************************ 00:18:20.462 END TEST raid_write_error_test 00:18:20.462 ************************************ 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.00 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid1 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@935 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:20.462 00:18:20.462 real 0m7.938s 00:18:20.462 user 0m12.164s 00:18:20.462 sys 0m0.852s 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.462 14:01:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.462 14:01:09 bdev_raid -- bdev/bdev_raid.sh@1019 -- # for n in {2..4} 00:18:20.462 14:01:09 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:18:20.462 14:01:09 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:18:20.462 14:01:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:20.462 14:01:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.462 14:01:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.462 ************************************ 00:18:20.462 START TEST raid_state_function_test 00:18:20.462 ************************************ 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 
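The write test that just finished differs from the read case in what it asserts after the failure: a failed write makes the raid1 drop the offending base bdev, so num_base_bdevs_discovered is expected to fall from 2 to 1 while the array stays online (the degraded JSON above, where BaseBdev1 is replaced by a null entry). A compressed sketch of that sequence, with the same socket and jq assumptions as earlier; backgrounding perform_tests is inferred from the timing in the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &   # I/O running in the background
  sleep 1
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure   # writes through BaseBdev1 now fail
  # the raid stays online but degraded: only BaseBdev2 remains configured
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 1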
00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=124957 00:18:20.462 Process raid pid: 124957 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124957' 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 124957 /var/tmp/spdk-raid.sock 00:18:20.462 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 124957 ']' 00:18:20.463 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:20.463 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:20.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:20.463 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.463 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:20.463 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.463 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.463 [2024-07-25 14:01:09.348870] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
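raid_state_function_test below exercises state transitions rather than I/O: Existed_Raid is created as raid0 (-z 64 strip size) over base bdevs that do not exist yet, so it sits in the "configuring" state, and each subsequent bdev_malloc_create raises num_base_bdevs_discovered until the array goes online with all three members. The test deletes and re-creates the raid between steps; the sketch below compresses that into the first two checks, using the same socket and jq filter as the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid   # members missing, raid stays "configuring"
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'     # "configuring"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1                                               # first member shows up
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # 1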
00:18:20.463 [2024-07-25 14:01:09.349080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.721 [2024-07-25 14:01:09.522845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.721 [2024-07-25 14:01:09.752105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.981 [2024-07-25 14:01:09.961826] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.548 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.548 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:21.548 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:21.806 [2024-07-25 14:01:10.657882] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.806 [2024-07-25 14:01:10.658026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.806 [2024-07-25 14:01:10.658043] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.807 [2024-07-25 14:01:10.658074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.807 [2024-07-25 14:01:10.658083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.807 [2024-07-25 14:01:10.658101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.807 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.065 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.065 "name": "Existed_Raid", 00:18:22.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.065 
"strip_size_kb": 64, 00:18:22.065 "state": "configuring", 00:18:22.065 "raid_level": "raid0", 00:18:22.065 "superblock": false, 00:18:22.065 "num_base_bdevs": 3, 00:18:22.065 "num_base_bdevs_discovered": 0, 00:18:22.065 "num_base_bdevs_operational": 3, 00:18:22.065 "base_bdevs_list": [ 00:18:22.065 { 00:18:22.065 "name": "BaseBdev1", 00:18:22.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.065 "is_configured": false, 00:18:22.065 "data_offset": 0, 00:18:22.065 "data_size": 0 00:18:22.065 }, 00:18:22.065 { 00:18:22.065 "name": "BaseBdev2", 00:18:22.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.065 "is_configured": false, 00:18:22.065 "data_offset": 0, 00:18:22.065 "data_size": 0 00:18:22.065 }, 00:18:22.065 { 00:18:22.065 "name": "BaseBdev3", 00:18:22.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.065 "is_configured": false, 00:18:22.065 "data_offset": 0, 00:18:22.065 "data_size": 0 00:18:22.065 } 00:18:22.065 ] 00:18:22.065 }' 00:18:22.065 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.065 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.632 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.199 [2024-07-25 14:01:11.962542] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.199 [2024-07-25 14:01:11.962618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:18:23.199 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:23.199 [2024-07-25 14:01:12.230614] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.199 [2024-07-25 14:01:12.230737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.199 [2024-07-25 14:01:12.230753] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.199 [2024-07-25 14:01:12.230773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.199 [2024-07-25 14:01:12.230782] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.199 [2024-07-25 14:01:12.230808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.458 14:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:23.715 [2024-07-25 14:01:12.556306] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.715 BaseBdev1 00:18:23.715 14:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:23.715 14:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:23.715 14:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:23.715 14:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:23.715 14:01:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:23.715 14:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:23.715 14:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:23.974 14:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.233 [ 00:18:24.233 { 00:18:24.233 "name": "BaseBdev1", 00:18:24.233 "aliases": [ 00:18:24.233 "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4" 00:18:24.233 ], 00:18:24.233 "product_name": "Malloc disk", 00:18:24.233 "block_size": 512, 00:18:24.233 "num_blocks": 65536, 00:18:24.233 "uuid": "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4", 00:18:24.233 "assigned_rate_limits": { 00:18:24.233 "rw_ios_per_sec": 0, 00:18:24.233 "rw_mbytes_per_sec": 0, 00:18:24.233 "r_mbytes_per_sec": 0, 00:18:24.233 "w_mbytes_per_sec": 0 00:18:24.233 }, 00:18:24.233 "claimed": true, 00:18:24.233 "claim_type": "exclusive_write", 00:18:24.233 "zoned": false, 00:18:24.233 "supported_io_types": { 00:18:24.233 "read": true, 00:18:24.233 "write": true, 00:18:24.233 "unmap": true, 00:18:24.233 "flush": true, 00:18:24.233 "reset": true, 00:18:24.233 "nvme_admin": false, 00:18:24.233 "nvme_io": false, 00:18:24.233 "nvme_io_md": false, 00:18:24.233 "write_zeroes": true, 00:18:24.233 "zcopy": true, 00:18:24.233 "get_zone_info": false, 00:18:24.233 "zone_management": false, 00:18:24.233 "zone_append": false, 00:18:24.233 "compare": false, 00:18:24.233 "compare_and_write": false, 00:18:24.233 "abort": true, 00:18:24.233 "seek_hole": false, 00:18:24.233 "seek_data": false, 00:18:24.233 "copy": true, 00:18:24.233 "nvme_iov_md": false 00:18:24.233 }, 00:18:24.233 "memory_domains": [ 00:18:24.233 { 00:18:24.233 "dma_device_id": "system", 00:18:24.233 "dma_device_type": 1 00:18:24.233 }, 00:18:24.233 { 00:18:24.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.233 "dma_device_type": 2 00:18:24.233 } 00:18:24.233 ], 00:18:24.233 "driver_specific": {} 00:18:24.233 } 00:18:24.233 ] 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.233 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.489 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.489 "name": "Existed_Raid", 00:18:24.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.489 "strip_size_kb": 64, 00:18:24.489 "state": "configuring", 00:18:24.489 "raid_level": "raid0", 00:18:24.489 "superblock": false, 00:18:24.489 "num_base_bdevs": 3, 00:18:24.489 "num_base_bdevs_discovered": 1, 00:18:24.489 "num_base_bdevs_operational": 3, 00:18:24.489 "base_bdevs_list": [ 00:18:24.489 { 00:18:24.489 "name": "BaseBdev1", 00:18:24.489 "uuid": "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4", 00:18:24.489 "is_configured": true, 00:18:24.489 "data_offset": 0, 00:18:24.489 "data_size": 65536 00:18:24.489 }, 00:18:24.489 { 00:18:24.489 "name": "BaseBdev2", 00:18:24.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.489 "is_configured": false, 00:18:24.489 "data_offset": 0, 00:18:24.489 "data_size": 0 00:18:24.489 }, 00:18:24.489 { 00:18:24.489 "name": "BaseBdev3", 00:18:24.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.489 "is_configured": false, 00:18:24.489 "data_offset": 0, 00:18:24.489 "data_size": 0 00:18:24.489 } 00:18:24.489 ] 00:18:24.489 }' 00:18:24.489 14:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.489 14:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.053 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.311 [2024-07-25 14:01:14.316868] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.311 [2024-07-25 14:01:14.316961] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:18:25.311 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:25.570 [2024-07-25 14:01:14.600963] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.570 [2024-07-25 14:01:14.603228] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.570 [2024-07-25 14:01:14.603318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.570 [2024-07-25 14:01:14.603333] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:25.570 [2024-07-25 14:01:14.603378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.829 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.087 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.087 "name": "Existed_Raid", 00:18:26.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.087 "strip_size_kb": 64, 00:18:26.087 "state": "configuring", 00:18:26.087 "raid_level": "raid0", 00:18:26.087 "superblock": false, 00:18:26.087 "num_base_bdevs": 3, 00:18:26.087 "num_base_bdevs_discovered": 1, 00:18:26.087 "num_base_bdevs_operational": 3, 00:18:26.087 "base_bdevs_list": [ 00:18:26.087 { 00:18:26.087 "name": "BaseBdev1", 00:18:26.087 "uuid": "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4", 00:18:26.087 "is_configured": true, 00:18:26.087 "data_offset": 0, 00:18:26.087 "data_size": 65536 00:18:26.087 }, 00:18:26.087 { 00:18:26.087 "name": "BaseBdev2", 00:18:26.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.087 "is_configured": false, 00:18:26.087 "data_offset": 0, 00:18:26.088 "data_size": 0 00:18:26.088 }, 00:18:26.088 { 00:18:26.088 "name": "BaseBdev3", 00:18:26.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.088 "is_configured": false, 00:18:26.088 "data_offset": 0, 00:18:26.088 "data_size": 0 00:18:26.088 } 00:18:26.088 ] 00:18:26.088 }' 00:18:26.088 14:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.088 14:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.654 14:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:26.913 [2024-07-25 14:01:15.811235] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.913 BaseBdev2 00:18:26.913 14:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:26.913 14:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:26.913 14:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:26.913 14:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:26.913 14:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:26.913 14:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:26.914 
14:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:27.171 14:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:27.429 [ 00:18:27.429 { 00:18:27.429 "name": "BaseBdev2", 00:18:27.429 "aliases": [ 00:18:27.429 "4ba60437-b6dd-496d-bd2c-8acbe02b7979" 00:18:27.429 ], 00:18:27.429 "product_name": "Malloc disk", 00:18:27.429 "block_size": 512, 00:18:27.429 "num_blocks": 65536, 00:18:27.429 "uuid": "4ba60437-b6dd-496d-bd2c-8acbe02b7979", 00:18:27.429 "assigned_rate_limits": { 00:18:27.429 "rw_ios_per_sec": 0, 00:18:27.429 "rw_mbytes_per_sec": 0, 00:18:27.429 "r_mbytes_per_sec": 0, 00:18:27.429 "w_mbytes_per_sec": 0 00:18:27.429 }, 00:18:27.429 "claimed": true, 00:18:27.429 "claim_type": "exclusive_write", 00:18:27.429 "zoned": false, 00:18:27.429 "supported_io_types": { 00:18:27.429 "read": true, 00:18:27.429 "write": true, 00:18:27.429 "unmap": true, 00:18:27.430 "flush": true, 00:18:27.430 "reset": true, 00:18:27.430 "nvme_admin": false, 00:18:27.430 "nvme_io": false, 00:18:27.430 "nvme_io_md": false, 00:18:27.430 "write_zeroes": true, 00:18:27.430 "zcopy": true, 00:18:27.430 "get_zone_info": false, 00:18:27.430 "zone_management": false, 00:18:27.430 "zone_append": false, 00:18:27.430 "compare": false, 00:18:27.430 "compare_and_write": false, 00:18:27.430 "abort": true, 00:18:27.430 "seek_hole": false, 00:18:27.430 "seek_data": false, 00:18:27.430 "copy": true, 00:18:27.430 "nvme_iov_md": false 00:18:27.430 }, 00:18:27.430 "memory_domains": [ 00:18:27.430 { 00:18:27.430 "dma_device_id": "system", 00:18:27.430 "dma_device_type": 1 00:18:27.430 }, 00:18:27.430 { 00:18:27.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.430 "dma_device_type": 2 00:18:27.430 } 00:18:27.430 ], 00:18:27.430 "driver_specific": {} 00:18:27.430 } 00:18:27.430 ] 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.430 14:01:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.430 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.689 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.689 "name": "Existed_Raid", 00:18:27.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.689 "strip_size_kb": 64, 00:18:27.689 "state": "configuring", 00:18:27.689 "raid_level": "raid0", 00:18:27.689 "superblock": false, 00:18:27.689 "num_base_bdevs": 3, 00:18:27.689 "num_base_bdevs_discovered": 2, 00:18:27.689 "num_base_bdevs_operational": 3, 00:18:27.689 "base_bdevs_list": [ 00:18:27.689 { 00:18:27.689 "name": "BaseBdev1", 00:18:27.689 "uuid": "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4", 00:18:27.689 "is_configured": true, 00:18:27.689 "data_offset": 0, 00:18:27.689 "data_size": 65536 00:18:27.689 }, 00:18:27.689 { 00:18:27.689 "name": "BaseBdev2", 00:18:27.689 "uuid": "4ba60437-b6dd-496d-bd2c-8acbe02b7979", 00:18:27.689 "is_configured": true, 00:18:27.689 "data_offset": 0, 00:18:27.689 "data_size": 65536 00:18:27.689 }, 00:18:27.689 { 00:18:27.689 "name": "BaseBdev3", 00:18:27.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.689 "is_configured": false, 00:18:27.689 "data_offset": 0, 00:18:27.689 "data_size": 0 00:18:27.689 } 00:18:27.689 ] 00:18:27.689 }' 00:18:27.689 14:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.689 14:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:28.644 [2024-07-25 14:01:17.604517] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.644 [2024-07-25 14:01:17.604606] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:18:28.644 [2024-07-25 14:01:17.604618] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:28.644 [2024-07-25 14:01:17.604760] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:28.644 [2024-07-25 14:01:17.605190] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:18:28.644 [2024-07-25 14:01:17.605219] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:18:28.644 [2024-07-25 14:01:17.605502] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.644 BaseBdev3 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:28.644 14:01:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.902 14:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:29.160 [ 00:18:29.160 { 00:18:29.160 "name": "BaseBdev3", 00:18:29.160 "aliases": [ 00:18:29.160 "e6f6bb2b-7d25-4f9e-8cb1-9dd0f8e4682a" 00:18:29.160 ], 00:18:29.160 "product_name": "Malloc disk", 00:18:29.160 "block_size": 512, 00:18:29.160 "num_blocks": 65536, 00:18:29.160 "uuid": "e6f6bb2b-7d25-4f9e-8cb1-9dd0f8e4682a", 00:18:29.160 "assigned_rate_limits": { 00:18:29.160 "rw_ios_per_sec": 0, 00:18:29.160 "rw_mbytes_per_sec": 0, 00:18:29.160 "r_mbytes_per_sec": 0, 00:18:29.160 "w_mbytes_per_sec": 0 00:18:29.160 }, 00:18:29.160 "claimed": true, 00:18:29.160 "claim_type": "exclusive_write", 00:18:29.160 "zoned": false, 00:18:29.160 "supported_io_types": { 00:18:29.160 "read": true, 00:18:29.161 "write": true, 00:18:29.161 "unmap": true, 00:18:29.161 "flush": true, 00:18:29.161 "reset": true, 00:18:29.161 "nvme_admin": false, 00:18:29.161 "nvme_io": false, 00:18:29.161 "nvme_io_md": false, 00:18:29.161 "write_zeroes": true, 00:18:29.161 "zcopy": true, 00:18:29.161 "get_zone_info": false, 00:18:29.161 "zone_management": false, 00:18:29.161 "zone_append": false, 00:18:29.161 "compare": false, 00:18:29.161 "compare_and_write": false, 00:18:29.161 "abort": true, 00:18:29.161 "seek_hole": false, 00:18:29.161 "seek_data": false, 00:18:29.161 "copy": true, 00:18:29.161 "nvme_iov_md": false 00:18:29.161 }, 00:18:29.161 "memory_domains": [ 00:18:29.161 { 00:18:29.161 "dma_device_id": "system", 00:18:29.161 "dma_device_type": 1 00:18:29.161 }, 00:18:29.161 { 00:18:29.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.161 "dma_device_type": 2 00:18:29.161 } 00:18:29.161 ], 00:18:29.161 "driver_specific": {} 00:18:29.161 } 00:18:29.161 ] 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.161 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.419 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.676 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.676 "name": "Existed_Raid", 00:18:29.676 "uuid": "53400f38-5bfa-450e-9c74-6dafcd3afee7", 00:18:29.676 "strip_size_kb": 64, 00:18:29.676 "state": "online", 00:18:29.676 "raid_level": "raid0", 00:18:29.676 "superblock": false, 00:18:29.676 "num_base_bdevs": 3, 00:18:29.676 "num_base_bdevs_discovered": 3, 00:18:29.676 "num_base_bdevs_operational": 3, 00:18:29.676 "base_bdevs_list": [ 00:18:29.676 { 00:18:29.676 "name": "BaseBdev1", 00:18:29.676 "uuid": "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4", 00:18:29.676 "is_configured": true, 00:18:29.676 "data_offset": 0, 00:18:29.676 "data_size": 65536 00:18:29.676 }, 00:18:29.676 { 00:18:29.676 "name": "BaseBdev2", 00:18:29.676 "uuid": "4ba60437-b6dd-496d-bd2c-8acbe02b7979", 00:18:29.676 "is_configured": true, 00:18:29.676 "data_offset": 0, 00:18:29.676 "data_size": 65536 00:18:29.676 }, 00:18:29.676 { 00:18:29.676 "name": "BaseBdev3", 00:18:29.676 "uuid": "e6f6bb2b-7d25-4f9e-8cb1-9dd0f8e4682a", 00:18:29.676 "is_configured": true, 00:18:29.676 "data_offset": 0, 00:18:29.676 "data_size": 65536 00:18:29.676 } 00:18:29.676 ] 00:18:29.676 }' 00:18:29.676 14:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.676 14:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:30.241 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:30.499 [2024-07-25 14:01:19.529400] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.757 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:30.757 "name": "Existed_Raid", 00:18:30.757 "aliases": [ 00:18:30.757 "53400f38-5bfa-450e-9c74-6dafcd3afee7" 00:18:30.757 ], 00:18:30.757 "product_name": "Raid Volume", 00:18:30.757 "block_size": 512, 00:18:30.757 "num_blocks": 196608, 00:18:30.757 "uuid": "53400f38-5bfa-450e-9c74-6dafcd3afee7", 00:18:30.757 "assigned_rate_limits": { 00:18:30.757 "rw_ios_per_sec": 0, 00:18:30.757 "rw_mbytes_per_sec": 0, 00:18:30.757 "r_mbytes_per_sec": 0, 00:18:30.757 "w_mbytes_per_sec": 0 00:18:30.757 }, 00:18:30.757 "claimed": false, 00:18:30.757 "zoned": false, 00:18:30.757 "supported_io_types": { 00:18:30.757 "read": true, 00:18:30.757 "write": true, 00:18:30.757 "unmap": true, 00:18:30.757 "flush": true, 00:18:30.757 "reset": true, 
00:18:30.757 "nvme_admin": false, 00:18:30.757 "nvme_io": false, 00:18:30.757 "nvme_io_md": false, 00:18:30.757 "write_zeroes": true, 00:18:30.757 "zcopy": false, 00:18:30.757 "get_zone_info": false, 00:18:30.757 "zone_management": false, 00:18:30.757 "zone_append": false, 00:18:30.757 "compare": false, 00:18:30.757 "compare_and_write": false, 00:18:30.757 "abort": false, 00:18:30.757 "seek_hole": false, 00:18:30.757 "seek_data": false, 00:18:30.757 "copy": false, 00:18:30.757 "nvme_iov_md": false 00:18:30.757 }, 00:18:30.757 "memory_domains": [ 00:18:30.757 { 00:18:30.757 "dma_device_id": "system", 00:18:30.757 "dma_device_type": 1 00:18:30.757 }, 00:18:30.757 { 00:18:30.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.757 "dma_device_type": 2 00:18:30.757 }, 00:18:30.757 { 00:18:30.757 "dma_device_id": "system", 00:18:30.757 "dma_device_type": 1 00:18:30.757 }, 00:18:30.757 { 00:18:30.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.757 "dma_device_type": 2 00:18:30.757 }, 00:18:30.757 { 00:18:30.757 "dma_device_id": "system", 00:18:30.757 "dma_device_type": 1 00:18:30.757 }, 00:18:30.757 { 00:18:30.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.757 "dma_device_type": 2 00:18:30.757 } 00:18:30.757 ], 00:18:30.757 "driver_specific": { 00:18:30.757 "raid": { 00:18:30.757 "uuid": "53400f38-5bfa-450e-9c74-6dafcd3afee7", 00:18:30.757 "strip_size_kb": 64, 00:18:30.757 "state": "online", 00:18:30.757 "raid_level": "raid0", 00:18:30.757 "superblock": false, 00:18:30.757 "num_base_bdevs": 3, 00:18:30.757 "num_base_bdevs_discovered": 3, 00:18:30.757 "num_base_bdevs_operational": 3, 00:18:30.757 "base_bdevs_list": [ 00:18:30.757 { 00:18:30.757 "name": "BaseBdev1", 00:18:30.757 "uuid": "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4", 00:18:30.758 "is_configured": true, 00:18:30.758 "data_offset": 0, 00:18:30.758 "data_size": 65536 00:18:30.758 }, 00:18:30.758 { 00:18:30.758 "name": "BaseBdev2", 00:18:30.758 "uuid": "4ba60437-b6dd-496d-bd2c-8acbe02b7979", 00:18:30.758 "is_configured": true, 00:18:30.758 "data_offset": 0, 00:18:30.758 "data_size": 65536 00:18:30.758 }, 00:18:30.758 { 00:18:30.758 "name": "BaseBdev3", 00:18:30.758 "uuid": "e6f6bb2b-7d25-4f9e-8cb1-9dd0f8e4682a", 00:18:30.758 "is_configured": true, 00:18:30.758 "data_offset": 0, 00:18:30.758 "data_size": 65536 00:18:30.758 } 00:18:30.758 ] 00:18:30.758 } 00:18:30.758 } 00:18:30.758 }' 00:18:30.758 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.758 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:30.758 BaseBdev2 00:18:30.758 BaseBdev3' 00:18:30.758 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:30.758 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:30.758 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:31.015 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.015 "name": "BaseBdev1", 00:18:31.015 "aliases": [ 00:18:31.015 "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4" 00:18:31.015 ], 00:18:31.015 "product_name": "Malloc disk", 00:18:31.015 "block_size": 512, 00:18:31.015 "num_blocks": 65536, 00:18:31.015 "uuid": "ee20ac3d-1ee2-460e-89fd-fefbb3b7d3f4", 00:18:31.015 
"assigned_rate_limits": { 00:18:31.015 "rw_ios_per_sec": 0, 00:18:31.015 "rw_mbytes_per_sec": 0, 00:18:31.015 "r_mbytes_per_sec": 0, 00:18:31.015 "w_mbytes_per_sec": 0 00:18:31.015 }, 00:18:31.015 "claimed": true, 00:18:31.015 "claim_type": "exclusive_write", 00:18:31.015 "zoned": false, 00:18:31.015 "supported_io_types": { 00:18:31.015 "read": true, 00:18:31.015 "write": true, 00:18:31.015 "unmap": true, 00:18:31.015 "flush": true, 00:18:31.015 "reset": true, 00:18:31.015 "nvme_admin": false, 00:18:31.015 "nvme_io": false, 00:18:31.015 "nvme_io_md": false, 00:18:31.015 "write_zeroes": true, 00:18:31.015 "zcopy": true, 00:18:31.015 "get_zone_info": false, 00:18:31.015 "zone_management": false, 00:18:31.015 "zone_append": false, 00:18:31.015 "compare": false, 00:18:31.015 "compare_and_write": false, 00:18:31.015 "abort": true, 00:18:31.015 "seek_hole": false, 00:18:31.015 "seek_data": false, 00:18:31.015 "copy": true, 00:18:31.015 "nvme_iov_md": false 00:18:31.015 }, 00:18:31.015 "memory_domains": [ 00:18:31.015 { 00:18:31.015 "dma_device_id": "system", 00:18:31.015 "dma_device_type": 1 00:18:31.015 }, 00:18:31.015 { 00:18:31.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.015 "dma_device_type": 2 00:18:31.015 } 00:18:31.015 ], 00:18:31.015 "driver_specific": {} 00:18:31.015 }' 00:18:31.015 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.015 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.015 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.015 14:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.015 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:31.273 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.840 "name": "BaseBdev2", 00:18:31.840 "aliases": [ 00:18:31.840 "4ba60437-b6dd-496d-bd2c-8acbe02b7979" 00:18:31.840 ], 00:18:31.840 "product_name": "Malloc disk", 00:18:31.840 "block_size": 512, 00:18:31.840 "num_blocks": 65536, 00:18:31.840 "uuid": "4ba60437-b6dd-496d-bd2c-8acbe02b7979", 00:18:31.840 "assigned_rate_limits": { 00:18:31.840 "rw_ios_per_sec": 0, 00:18:31.840 "rw_mbytes_per_sec": 0, 00:18:31.840 "r_mbytes_per_sec": 0, 00:18:31.840 "w_mbytes_per_sec": 0 00:18:31.840 }, 00:18:31.840 
"claimed": true, 00:18:31.840 "claim_type": "exclusive_write", 00:18:31.840 "zoned": false, 00:18:31.840 "supported_io_types": { 00:18:31.840 "read": true, 00:18:31.840 "write": true, 00:18:31.840 "unmap": true, 00:18:31.840 "flush": true, 00:18:31.840 "reset": true, 00:18:31.840 "nvme_admin": false, 00:18:31.840 "nvme_io": false, 00:18:31.840 "nvme_io_md": false, 00:18:31.840 "write_zeroes": true, 00:18:31.840 "zcopy": true, 00:18:31.840 "get_zone_info": false, 00:18:31.840 "zone_management": false, 00:18:31.840 "zone_append": false, 00:18:31.840 "compare": false, 00:18:31.840 "compare_and_write": false, 00:18:31.840 "abort": true, 00:18:31.840 "seek_hole": false, 00:18:31.840 "seek_data": false, 00:18:31.840 "copy": true, 00:18:31.840 "nvme_iov_md": false 00:18:31.840 }, 00:18:31.840 "memory_domains": [ 00:18:31.840 { 00:18:31.840 "dma_device_id": "system", 00:18:31.840 "dma_device_type": 1 00:18:31.840 }, 00:18:31.840 { 00:18:31.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.840 "dma_device_type": 2 00:18:31.840 } 00:18:31.840 ], 00:18:31.840 "driver_specific": {} 00:18:31.840 }' 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.840 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.098 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.098 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.098 14:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.098 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.098 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.098 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:32.098 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.393 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.393 "name": "BaseBdev3", 00:18:32.393 "aliases": [ 00:18:32.393 "e6f6bb2b-7d25-4f9e-8cb1-9dd0f8e4682a" 00:18:32.393 ], 00:18:32.393 "product_name": "Malloc disk", 00:18:32.393 "block_size": 512, 00:18:32.393 "num_blocks": 65536, 00:18:32.393 "uuid": "e6f6bb2b-7d25-4f9e-8cb1-9dd0f8e4682a", 00:18:32.393 "assigned_rate_limits": { 00:18:32.393 "rw_ios_per_sec": 0, 00:18:32.393 "rw_mbytes_per_sec": 0, 00:18:32.393 "r_mbytes_per_sec": 0, 00:18:32.393 "w_mbytes_per_sec": 0 00:18:32.393 }, 00:18:32.393 "claimed": true, 00:18:32.393 "claim_type": "exclusive_write", 00:18:32.393 "zoned": false, 00:18:32.393 "supported_io_types": { 00:18:32.393 "read": true, 00:18:32.393 "write": true, 00:18:32.393 
"unmap": true, 00:18:32.393 "flush": true, 00:18:32.393 "reset": true, 00:18:32.393 "nvme_admin": false, 00:18:32.393 "nvme_io": false, 00:18:32.393 "nvme_io_md": false, 00:18:32.393 "write_zeroes": true, 00:18:32.393 "zcopy": true, 00:18:32.393 "get_zone_info": false, 00:18:32.393 "zone_management": false, 00:18:32.393 "zone_append": false, 00:18:32.393 "compare": false, 00:18:32.393 "compare_and_write": false, 00:18:32.393 "abort": true, 00:18:32.393 "seek_hole": false, 00:18:32.393 "seek_data": false, 00:18:32.393 "copy": true, 00:18:32.393 "nvme_iov_md": false 00:18:32.393 }, 00:18:32.393 "memory_domains": [ 00:18:32.393 { 00:18:32.393 "dma_device_id": "system", 00:18:32.393 "dma_device_type": 1 00:18:32.393 }, 00:18:32.393 { 00:18:32.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.393 "dma_device_type": 2 00:18:32.393 } 00:18:32.393 ], 00:18:32.393 "driver_specific": {} 00:18:32.393 }' 00:18:32.393 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.393 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.651 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.910 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.910 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.910 14:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:33.168 [2024-07-25 14:01:22.045952] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.168 [2024-07-25 14:01:22.046004] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.168 [2024-07-25 14:01:22.046065] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.168 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.733 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.733 "name": "Existed_Raid", 00:18:33.733 "uuid": "53400f38-5bfa-450e-9c74-6dafcd3afee7", 00:18:33.733 "strip_size_kb": 64, 00:18:33.733 "state": "offline", 00:18:33.733 "raid_level": "raid0", 00:18:33.733 "superblock": false, 00:18:33.733 "num_base_bdevs": 3, 00:18:33.733 "num_base_bdevs_discovered": 2, 00:18:33.733 "num_base_bdevs_operational": 2, 00:18:33.733 "base_bdevs_list": [ 00:18:33.733 { 00:18:33.733 "name": null, 00:18:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.733 "is_configured": false, 00:18:33.733 "data_offset": 0, 00:18:33.733 "data_size": 65536 00:18:33.733 }, 00:18:33.733 { 00:18:33.733 "name": "BaseBdev2", 00:18:33.733 "uuid": "4ba60437-b6dd-496d-bd2c-8acbe02b7979", 00:18:33.733 "is_configured": true, 00:18:33.733 "data_offset": 0, 00:18:33.733 "data_size": 65536 00:18:33.733 }, 00:18:33.733 { 00:18:33.733 "name": "BaseBdev3", 00:18:33.733 "uuid": "e6f6bb2b-7d25-4f9e-8cb1-9dd0f8e4682a", 00:18:33.733 "is_configured": true, 00:18:33.733 "data_offset": 0, 00:18:33.733 "data_size": 65536 00:18:33.733 } 00:18:33.733 ] 00:18:33.733 }' 00:18:33.733 14:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.733 14:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.298 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:34.298 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.298 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.298 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:34.575 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:34.575 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.575 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:34.834 [2024-07-25 14:01:23.715152] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:18:34.835 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:34.835 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.835 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.835 14:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:35.093 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:35.093 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.093 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:35.352 [2024-07-25 14:01:24.367657] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:35.352 [2024-07-25 14:01:24.367747] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:18:35.610 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:35.610 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:35.610 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.610 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:35.869 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:35.869 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:35.869 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:35.869 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:35.869 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:35.869 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:36.170 BaseBdev2 00:18:36.170 14:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:36.170 14:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:36.170 14:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:36.170 14:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:36.170 14:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.170 14:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.170 14:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.443 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:36.701 [ 00:18:36.701 { 00:18:36.701 "name": "BaseBdev2", 
00:18:36.701 "aliases": [ 00:18:36.701 "0746db65-0e3c-478f-bb4b-701047313080" 00:18:36.701 ], 00:18:36.701 "product_name": "Malloc disk", 00:18:36.701 "block_size": 512, 00:18:36.701 "num_blocks": 65536, 00:18:36.701 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:36.701 "assigned_rate_limits": { 00:18:36.701 "rw_ios_per_sec": 0, 00:18:36.701 "rw_mbytes_per_sec": 0, 00:18:36.701 "r_mbytes_per_sec": 0, 00:18:36.701 "w_mbytes_per_sec": 0 00:18:36.701 }, 00:18:36.701 "claimed": false, 00:18:36.701 "zoned": false, 00:18:36.701 "supported_io_types": { 00:18:36.701 "read": true, 00:18:36.701 "write": true, 00:18:36.701 "unmap": true, 00:18:36.701 "flush": true, 00:18:36.701 "reset": true, 00:18:36.701 "nvme_admin": false, 00:18:36.701 "nvme_io": false, 00:18:36.701 "nvme_io_md": false, 00:18:36.701 "write_zeroes": true, 00:18:36.701 "zcopy": true, 00:18:36.701 "get_zone_info": false, 00:18:36.701 "zone_management": false, 00:18:36.701 "zone_append": false, 00:18:36.701 "compare": false, 00:18:36.701 "compare_and_write": false, 00:18:36.701 "abort": true, 00:18:36.701 "seek_hole": false, 00:18:36.701 "seek_data": false, 00:18:36.701 "copy": true, 00:18:36.701 "nvme_iov_md": false 00:18:36.701 }, 00:18:36.701 "memory_domains": [ 00:18:36.701 { 00:18:36.701 "dma_device_id": "system", 00:18:36.701 "dma_device_type": 1 00:18:36.701 }, 00:18:36.701 { 00:18:36.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.701 "dma_device_type": 2 00:18:36.701 } 00:18:36.701 ], 00:18:36.701 "driver_specific": {} 00:18:36.701 } 00:18:36.701 ] 00:18:36.701 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:36.701 14:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:36.701 14:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:36.701 14:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:36.960 BaseBdev3 00:18:36.960 14:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:36.960 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:36.960 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:36.960 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:36.960 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.960 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.960 14:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.218 14:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:37.477 [ 00:18:37.477 { 00:18:37.477 "name": "BaseBdev3", 00:18:37.477 "aliases": [ 00:18:37.477 "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3" 00:18:37.477 ], 00:18:37.477 "product_name": "Malloc disk", 00:18:37.477 "block_size": 512, 00:18:37.477 "num_blocks": 65536, 00:18:37.477 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:37.477 "assigned_rate_limits": { 00:18:37.477 "rw_ios_per_sec": 0, 
00:18:37.477 "rw_mbytes_per_sec": 0, 00:18:37.477 "r_mbytes_per_sec": 0, 00:18:37.477 "w_mbytes_per_sec": 0 00:18:37.477 }, 00:18:37.477 "claimed": false, 00:18:37.477 "zoned": false, 00:18:37.477 "supported_io_types": { 00:18:37.477 "read": true, 00:18:37.477 "write": true, 00:18:37.477 "unmap": true, 00:18:37.477 "flush": true, 00:18:37.477 "reset": true, 00:18:37.477 "nvme_admin": false, 00:18:37.477 "nvme_io": false, 00:18:37.477 "nvme_io_md": false, 00:18:37.477 "write_zeroes": true, 00:18:37.477 "zcopy": true, 00:18:37.477 "get_zone_info": false, 00:18:37.477 "zone_management": false, 00:18:37.477 "zone_append": false, 00:18:37.477 "compare": false, 00:18:37.477 "compare_and_write": false, 00:18:37.477 "abort": true, 00:18:37.477 "seek_hole": false, 00:18:37.477 "seek_data": false, 00:18:37.477 "copy": true, 00:18:37.477 "nvme_iov_md": false 00:18:37.477 }, 00:18:37.477 "memory_domains": [ 00:18:37.477 { 00:18:37.477 "dma_device_id": "system", 00:18:37.477 "dma_device_type": 1 00:18:37.477 }, 00:18:37.477 { 00:18:37.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.477 "dma_device_type": 2 00:18:37.477 } 00:18:37.477 ], 00:18:37.477 "driver_specific": {} 00:18:37.477 } 00:18:37.477 ] 00:18:37.477 14:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:37.477 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:37.477 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:37.477 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:37.736 [2024-07-25 14:01:26.675131] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.736 [2024-07-25 14:01:26.675233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.736 [2024-07-25 14:01:26.675292] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.736 [2024-07-25 14:01:26.677449] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.736 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.995 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.995 "name": "Existed_Raid", 00:18:37.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.995 "strip_size_kb": 64, 00:18:37.995 "state": "configuring", 00:18:37.995 "raid_level": "raid0", 00:18:37.995 "superblock": false, 00:18:37.995 "num_base_bdevs": 3, 00:18:37.995 "num_base_bdevs_discovered": 2, 00:18:37.995 "num_base_bdevs_operational": 3, 00:18:37.995 "base_bdevs_list": [ 00:18:37.995 { 00:18:37.995 "name": "BaseBdev1", 00:18:37.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.995 "is_configured": false, 00:18:37.995 "data_offset": 0, 00:18:37.995 "data_size": 0 00:18:37.995 }, 00:18:37.995 { 00:18:37.995 "name": "BaseBdev2", 00:18:37.995 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:37.995 "is_configured": true, 00:18:37.995 "data_offset": 0, 00:18:37.995 "data_size": 65536 00:18:37.995 }, 00:18:37.995 { 00:18:37.995 "name": "BaseBdev3", 00:18:37.995 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:37.995 "is_configured": true, 00:18:37.995 "data_offset": 0, 00:18:37.995 "data_size": 65536 00:18:37.995 } 00:18:37.995 ] 00:18:37.995 }' 00:18:37.995 14:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.995 14:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.561 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:38.820 [2024-07-25 14:01:27.827412] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.820 14:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.386 14:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.386 "name": "Existed_Raid", 
00:18:39.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.386 "strip_size_kb": 64, 00:18:39.386 "state": "configuring", 00:18:39.386 "raid_level": "raid0", 00:18:39.386 "superblock": false, 00:18:39.386 "num_base_bdevs": 3, 00:18:39.386 "num_base_bdevs_discovered": 1, 00:18:39.386 "num_base_bdevs_operational": 3, 00:18:39.386 "base_bdevs_list": [ 00:18:39.386 { 00:18:39.386 "name": "BaseBdev1", 00:18:39.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.386 "is_configured": false, 00:18:39.386 "data_offset": 0, 00:18:39.386 "data_size": 0 00:18:39.386 }, 00:18:39.386 { 00:18:39.386 "name": null, 00:18:39.386 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:39.386 "is_configured": false, 00:18:39.386 "data_offset": 0, 00:18:39.386 "data_size": 65536 00:18:39.386 }, 00:18:39.386 { 00:18:39.386 "name": "BaseBdev3", 00:18:39.386 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:39.386 "is_configured": true, 00:18:39.386 "data_offset": 0, 00:18:39.386 "data_size": 65536 00:18:39.386 } 00:18:39.386 ] 00:18:39.386 }' 00:18:39.386 14:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.386 14:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.952 14:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.952 14:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:40.210 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:40.210 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:40.468 [2024-07-25 14:01:29.347209] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.468 BaseBdev1 00:18:40.468 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:40.468 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:40.468 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:40.468 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:40.468 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:40.468 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:40.468 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:40.726 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.984 [ 00:18:40.984 { 00:18:40.984 "name": "BaseBdev1", 00:18:40.984 "aliases": [ 00:18:40.984 "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2" 00:18:40.984 ], 00:18:40.984 "product_name": "Malloc disk", 00:18:40.984 "block_size": 512, 00:18:40.984 "num_blocks": 65536, 00:18:40.984 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:40.984 "assigned_rate_limits": { 00:18:40.984 "rw_ios_per_sec": 0, 00:18:40.984 "rw_mbytes_per_sec": 0, 00:18:40.984 
"r_mbytes_per_sec": 0, 00:18:40.984 "w_mbytes_per_sec": 0 00:18:40.984 }, 00:18:40.984 "claimed": true, 00:18:40.984 "claim_type": "exclusive_write", 00:18:40.984 "zoned": false, 00:18:40.984 "supported_io_types": { 00:18:40.984 "read": true, 00:18:40.984 "write": true, 00:18:40.984 "unmap": true, 00:18:40.984 "flush": true, 00:18:40.984 "reset": true, 00:18:40.984 "nvme_admin": false, 00:18:40.984 "nvme_io": false, 00:18:40.984 "nvme_io_md": false, 00:18:40.984 "write_zeroes": true, 00:18:40.984 "zcopy": true, 00:18:40.984 "get_zone_info": false, 00:18:40.984 "zone_management": false, 00:18:40.984 "zone_append": false, 00:18:40.984 "compare": false, 00:18:40.984 "compare_and_write": false, 00:18:40.984 "abort": true, 00:18:40.984 "seek_hole": false, 00:18:40.984 "seek_data": false, 00:18:40.984 "copy": true, 00:18:40.984 "nvme_iov_md": false 00:18:40.984 }, 00:18:40.984 "memory_domains": [ 00:18:40.984 { 00:18:40.984 "dma_device_id": "system", 00:18:40.984 "dma_device_type": 1 00:18:40.984 }, 00:18:40.984 { 00:18:40.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.984 "dma_device_type": 2 00:18:40.984 } 00:18:40.984 ], 00:18:40.984 "driver_specific": {} 00:18:40.984 } 00:18:40.984 ] 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.984 14:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.243 14:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.243 "name": "Existed_Raid", 00:18:41.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.243 "strip_size_kb": 64, 00:18:41.243 "state": "configuring", 00:18:41.243 "raid_level": "raid0", 00:18:41.243 "superblock": false, 00:18:41.243 "num_base_bdevs": 3, 00:18:41.243 "num_base_bdevs_discovered": 2, 00:18:41.243 "num_base_bdevs_operational": 3, 00:18:41.243 "base_bdevs_list": [ 00:18:41.243 { 00:18:41.243 "name": "BaseBdev1", 00:18:41.243 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:41.243 "is_configured": true, 00:18:41.243 "data_offset": 0, 00:18:41.243 "data_size": 65536 00:18:41.243 }, 00:18:41.243 { 00:18:41.243 "name": 
null, 00:18:41.243 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:41.243 "is_configured": false, 00:18:41.243 "data_offset": 0, 00:18:41.243 "data_size": 65536 00:18:41.243 }, 00:18:41.243 { 00:18:41.243 "name": "BaseBdev3", 00:18:41.243 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:41.243 "is_configured": true, 00:18:41.243 "data_offset": 0, 00:18:41.243 "data_size": 65536 00:18:41.243 } 00:18:41.243 ] 00:18:41.243 }' 00:18:41.243 14:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.243 14:01:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.176 14:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:42.176 14:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.176 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:42.176 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:42.433 [2024-07-25 14:01:31.431915] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:42.433 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:42.433 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.433 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:42.433 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:42.433 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:42.433 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:42.433 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.434 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.434 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.434 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.434 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.434 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.692 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.692 "name": "Existed_Raid", 00:18:42.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.692 "strip_size_kb": 64, 00:18:42.692 "state": "configuring", 00:18:42.692 "raid_level": "raid0", 00:18:42.692 "superblock": false, 00:18:42.692 "num_base_bdevs": 3, 00:18:42.692 "num_base_bdevs_discovered": 1, 00:18:42.692 "num_base_bdevs_operational": 3, 00:18:42.692 "base_bdevs_list": [ 00:18:42.692 { 00:18:42.692 "name": "BaseBdev1", 00:18:42.692 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:42.692 "is_configured": true, 00:18:42.692 "data_offset": 0, 00:18:42.692 "data_size": 65536 
00:18:42.692 }, 00:18:42.692 { 00:18:42.692 "name": null, 00:18:42.692 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:42.692 "is_configured": false, 00:18:42.692 "data_offset": 0, 00:18:42.692 "data_size": 65536 00:18:42.692 }, 00:18:42.692 { 00:18:42.692 "name": null, 00:18:42.692 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:42.692 "is_configured": false, 00:18:42.692 "data_offset": 0, 00:18:42.692 "data_size": 65536 00:18:42.692 } 00:18:42.692 ] 00:18:42.692 }' 00:18:42.692 14:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.692 14:01:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.626 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.626 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:43.885 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:43.885 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:44.144 [2024-07-25 14:01:32.984342] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.144 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.144 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.144 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.403 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.403 "name": "Existed_Raid", 00:18:44.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.403 "strip_size_kb": 64, 00:18:44.403 "state": "configuring", 00:18:44.403 "raid_level": "raid0", 00:18:44.403 "superblock": false, 00:18:44.403 "num_base_bdevs": 3, 00:18:44.403 "num_base_bdevs_discovered": 2, 00:18:44.403 "num_base_bdevs_operational": 3, 00:18:44.403 "base_bdevs_list": [ 00:18:44.403 { 00:18:44.403 "name": "BaseBdev1", 00:18:44.403 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:44.403 
"is_configured": true, 00:18:44.403 "data_offset": 0, 00:18:44.403 "data_size": 65536 00:18:44.403 }, 00:18:44.403 { 00:18:44.403 "name": null, 00:18:44.403 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:44.403 "is_configured": false, 00:18:44.403 "data_offset": 0, 00:18:44.403 "data_size": 65536 00:18:44.403 }, 00:18:44.403 { 00:18:44.403 "name": "BaseBdev3", 00:18:44.403 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:44.403 "is_configured": true, 00:18:44.403 "data_offset": 0, 00:18:44.403 "data_size": 65536 00:18:44.403 } 00:18:44.403 ] 00:18:44.403 }' 00:18:44.403 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.403 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.336 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.336 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.336 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:45.336 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:45.595 [2024-07-25 14:01:34.564792] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.853 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.112 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.112 "name": "Existed_Raid", 00:18:46.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.112 "strip_size_kb": 64, 00:18:46.112 "state": "configuring", 00:18:46.112 "raid_level": "raid0", 00:18:46.112 "superblock": false, 00:18:46.112 "num_base_bdevs": 3, 00:18:46.112 "num_base_bdevs_discovered": 1, 00:18:46.112 "num_base_bdevs_operational": 3, 00:18:46.112 "base_bdevs_list": [ 00:18:46.112 { 00:18:46.112 "name": null, 00:18:46.112 "uuid": 
"8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:46.112 "is_configured": false, 00:18:46.112 "data_offset": 0, 00:18:46.112 "data_size": 65536 00:18:46.112 }, 00:18:46.112 { 00:18:46.112 "name": null, 00:18:46.112 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:46.112 "is_configured": false, 00:18:46.112 "data_offset": 0, 00:18:46.112 "data_size": 65536 00:18:46.112 }, 00:18:46.112 { 00:18:46.112 "name": "BaseBdev3", 00:18:46.112 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:46.112 "is_configured": true, 00:18:46.112 "data_offset": 0, 00:18:46.112 "data_size": 65536 00:18:46.112 } 00:18:46.112 ] 00:18:46.112 }' 00:18:46.112 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.112 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.679 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.679 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:46.990 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:46.990 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:47.249 [2024-07-25 14:01:36.164996] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.249 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.507 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.507 "name": "Existed_Raid", 00:18:47.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.507 "strip_size_kb": 64, 00:18:47.507 "state": "configuring", 00:18:47.507 "raid_level": "raid0", 00:18:47.507 "superblock": false, 00:18:47.507 "num_base_bdevs": 3, 00:18:47.507 "num_base_bdevs_discovered": 2, 00:18:47.507 "num_base_bdevs_operational": 3, 00:18:47.507 
"base_bdevs_list": [ 00:18:47.507 { 00:18:47.507 "name": null, 00:18:47.507 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:47.507 "is_configured": false, 00:18:47.507 "data_offset": 0, 00:18:47.507 "data_size": 65536 00:18:47.507 }, 00:18:47.507 { 00:18:47.507 "name": "BaseBdev2", 00:18:47.507 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:47.507 "is_configured": true, 00:18:47.507 "data_offset": 0, 00:18:47.507 "data_size": 65536 00:18:47.507 }, 00:18:47.507 { 00:18:47.507 "name": "BaseBdev3", 00:18:47.507 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:47.507 "is_configured": true, 00:18:47.507 "data_offset": 0, 00:18:47.508 "data_size": 65536 00:18:47.508 } 00:18:47.508 ] 00:18:47.508 }' 00:18:47.508 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.508 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.075 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.075 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:48.334 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:48.334 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:48.334 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.901 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2 00:18:49.159 [2024-07-25 14:01:37.971033] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:49.159 [2024-07-25 14:01:37.971114] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:18:49.159 [2024-07-25 14:01:37.971125] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:49.159 [2024-07-25 14:01:37.971270] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:49.159 [2024-07-25 14:01:37.971627] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:18:49.159 [2024-07-25 14:01:37.971655] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:18:49.159 [2024-07-25 14:01:37.971910] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.159 NewBaseBdev 00:18:49.159 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:49.159 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:49.159 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:49.159 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:49.159 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:49.159 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:49.159 14:01:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.417 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:49.676 [ 00:18:49.676 { 00:18:49.676 "name": "NewBaseBdev", 00:18:49.676 "aliases": [ 00:18:49.676 "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2" 00:18:49.676 ], 00:18:49.676 "product_name": "Malloc disk", 00:18:49.676 "block_size": 512, 00:18:49.676 "num_blocks": 65536, 00:18:49.676 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:49.676 "assigned_rate_limits": { 00:18:49.676 "rw_ios_per_sec": 0, 00:18:49.676 "rw_mbytes_per_sec": 0, 00:18:49.676 "r_mbytes_per_sec": 0, 00:18:49.676 "w_mbytes_per_sec": 0 00:18:49.676 }, 00:18:49.676 "claimed": true, 00:18:49.676 "claim_type": "exclusive_write", 00:18:49.676 "zoned": false, 00:18:49.676 "supported_io_types": { 00:18:49.676 "read": true, 00:18:49.676 "write": true, 00:18:49.676 "unmap": true, 00:18:49.676 "flush": true, 00:18:49.676 "reset": true, 00:18:49.676 "nvme_admin": false, 00:18:49.676 "nvme_io": false, 00:18:49.676 "nvme_io_md": false, 00:18:49.676 "write_zeroes": true, 00:18:49.677 "zcopy": true, 00:18:49.677 "get_zone_info": false, 00:18:49.677 "zone_management": false, 00:18:49.677 "zone_append": false, 00:18:49.677 "compare": false, 00:18:49.677 "compare_and_write": false, 00:18:49.677 "abort": true, 00:18:49.677 "seek_hole": false, 00:18:49.677 "seek_data": false, 00:18:49.677 "copy": true, 00:18:49.677 "nvme_iov_md": false 00:18:49.677 }, 00:18:49.677 "memory_domains": [ 00:18:49.677 { 00:18:49.677 "dma_device_id": "system", 00:18:49.677 "dma_device_type": 1 00:18:49.677 }, 00:18:49.677 { 00:18:49.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.677 "dma_device_type": 2 00:18:49.677 } 00:18:49.677 ], 00:18:49.677 "driver_specific": {} 00:18:49.677 } 00:18:49.677 ] 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.677 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:49.936 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.936 "name": "Existed_Raid", 00:18:49.936 "uuid": "50a7a0f0-309d-481a-a606-e6597822e975", 00:18:49.936 "strip_size_kb": 64, 00:18:49.936 "state": "online", 00:18:49.936 "raid_level": "raid0", 00:18:49.936 "superblock": false, 00:18:49.936 "num_base_bdevs": 3, 00:18:49.936 "num_base_bdevs_discovered": 3, 00:18:49.936 "num_base_bdevs_operational": 3, 00:18:49.936 "base_bdevs_list": [ 00:18:49.936 { 00:18:49.936 "name": "NewBaseBdev", 00:18:49.936 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:49.936 "is_configured": true, 00:18:49.936 "data_offset": 0, 00:18:49.936 "data_size": 65536 00:18:49.936 }, 00:18:49.936 { 00:18:49.936 "name": "BaseBdev2", 00:18:49.936 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:49.936 "is_configured": true, 00:18:49.936 "data_offset": 0, 00:18:49.936 "data_size": 65536 00:18:49.936 }, 00:18:49.936 { 00:18:49.936 "name": "BaseBdev3", 00:18:49.936 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:49.936 "is_configured": true, 00:18:49.936 "data_offset": 0, 00:18:49.936 "data_size": 65536 00:18:49.936 } 00:18:49.936 ] 00:18:49.936 }' 00:18:49.936 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.936 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:50.871 [2024-07-25 14:01:39.855982] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.871 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:50.871 "name": "Existed_Raid", 00:18:50.871 "aliases": [ 00:18:50.871 "50a7a0f0-309d-481a-a606-e6597822e975" 00:18:50.871 ], 00:18:50.871 "product_name": "Raid Volume", 00:18:50.871 "block_size": 512, 00:18:50.871 "num_blocks": 196608, 00:18:50.871 "uuid": "50a7a0f0-309d-481a-a606-e6597822e975", 00:18:50.871 "assigned_rate_limits": { 00:18:50.871 "rw_ios_per_sec": 0, 00:18:50.871 "rw_mbytes_per_sec": 0, 00:18:50.871 "r_mbytes_per_sec": 0, 00:18:50.871 "w_mbytes_per_sec": 0 00:18:50.871 }, 00:18:50.871 "claimed": false, 00:18:50.871 "zoned": false, 00:18:50.871 "supported_io_types": { 00:18:50.871 "read": true, 00:18:50.871 "write": true, 00:18:50.871 "unmap": true, 00:18:50.871 "flush": true, 00:18:50.871 "reset": true, 00:18:50.871 "nvme_admin": false, 00:18:50.871 "nvme_io": false, 00:18:50.871 "nvme_io_md": false, 00:18:50.871 "write_zeroes": true, 00:18:50.871 "zcopy": false, 00:18:50.871 "get_zone_info": false, 
00:18:50.871 "zone_management": false, 00:18:50.871 "zone_append": false, 00:18:50.871 "compare": false, 00:18:50.871 "compare_and_write": false, 00:18:50.871 "abort": false, 00:18:50.871 "seek_hole": false, 00:18:50.871 "seek_data": false, 00:18:50.871 "copy": false, 00:18:50.871 "nvme_iov_md": false 00:18:50.871 }, 00:18:50.871 "memory_domains": [ 00:18:50.871 { 00:18:50.871 "dma_device_id": "system", 00:18:50.871 "dma_device_type": 1 00:18:50.871 }, 00:18:50.871 { 00:18:50.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.871 "dma_device_type": 2 00:18:50.871 }, 00:18:50.871 { 00:18:50.871 "dma_device_id": "system", 00:18:50.871 "dma_device_type": 1 00:18:50.871 }, 00:18:50.871 { 00:18:50.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.871 "dma_device_type": 2 00:18:50.871 }, 00:18:50.871 { 00:18:50.871 "dma_device_id": "system", 00:18:50.871 "dma_device_type": 1 00:18:50.871 }, 00:18:50.871 { 00:18:50.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.871 "dma_device_type": 2 00:18:50.871 } 00:18:50.871 ], 00:18:50.871 "driver_specific": { 00:18:50.871 "raid": { 00:18:50.871 "uuid": "50a7a0f0-309d-481a-a606-e6597822e975", 00:18:50.871 "strip_size_kb": 64, 00:18:50.871 "state": "online", 00:18:50.871 "raid_level": "raid0", 00:18:50.872 "superblock": false, 00:18:50.872 "num_base_bdevs": 3, 00:18:50.872 "num_base_bdevs_discovered": 3, 00:18:50.872 "num_base_bdevs_operational": 3, 00:18:50.872 "base_bdevs_list": [ 00:18:50.872 { 00:18:50.872 "name": "NewBaseBdev", 00:18:50.872 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:50.872 "is_configured": true, 00:18:50.872 "data_offset": 0, 00:18:50.872 "data_size": 65536 00:18:50.872 }, 00:18:50.872 { 00:18:50.872 "name": "BaseBdev2", 00:18:50.872 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:50.872 "is_configured": true, 00:18:50.872 "data_offset": 0, 00:18:50.872 "data_size": 65536 00:18:50.872 }, 00:18:50.872 { 00:18:50.872 "name": "BaseBdev3", 00:18:50.872 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:50.872 "is_configured": true, 00:18:50.872 "data_offset": 0, 00:18:50.872 "data_size": 65536 00:18:50.872 } 00:18:50.872 ] 00:18:50.872 } 00:18:50.872 } 00:18:50.872 }' 00:18:50.872 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:51.130 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:51.130 BaseBdev2 00:18:51.130 BaseBdev3' 00:18:51.130 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.130 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:51.130 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.388 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.388 "name": "NewBaseBdev", 00:18:51.388 "aliases": [ 00:18:51.388 "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2" 00:18:51.388 ], 00:18:51.388 "product_name": "Malloc disk", 00:18:51.388 "block_size": 512, 00:18:51.388 "num_blocks": 65536, 00:18:51.388 "uuid": "8cbfb7e1-4069-4a13-a4ff-11ccfc0b8ac2", 00:18:51.388 "assigned_rate_limits": { 00:18:51.388 "rw_ios_per_sec": 0, 00:18:51.388 "rw_mbytes_per_sec": 0, 00:18:51.388 "r_mbytes_per_sec": 0, 00:18:51.388 "w_mbytes_per_sec": 0 00:18:51.388 }, 00:18:51.388 "claimed": 
true, 00:18:51.388 "claim_type": "exclusive_write", 00:18:51.388 "zoned": false, 00:18:51.388 "supported_io_types": { 00:18:51.388 "read": true, 00:18:51.388 "write": true, 00:18:51.388 "unmap": true, 00:18:51.388 "flush": true, 00:18:51.388 "reset": true, 00:18:51.388 "nvme_admin": false, 00:18:51.388 "nvme_io": false, 00:18:51.388 "nvme_io_md": false, 00:18:51.388 "write_zeroes": true, 00:18:51.388 "zcopy": true, 00:18:51.388 "get_zone_info": false, 00:18:51.388 "zone_management": false, 00:18:51.388 "zone_append": false, 00:18:51.388 "compare": false, 00:18:51.388 "compare_and_write": false, 00:18:51.388 "abort": true, 00:18:51.389 "seek_hole": false, 00:18:51.389 "seek_data": false, 00:18:51.389 "copy": true, 00:18:51.389 "nvme_iov_md": false 00:18:51.389 }, 00:18:51.389 "memory_domains": [ 00:18:51.389 { 00:18:51.389 "dma_device_id": "system", 00:18:51.389 "dma_device_type": 1 00:18:51.389 }, 00:18:51.389 { 00:18:51.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.389 "dma_device_type": 2 00:18:51.389 } 00:18:51.389 ], 00:18:51.389 "driver_specific": {} 00:18:51.389 }' 00:18:51.389 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.389 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.389 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.389 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.389 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.389 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.389 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:51.661 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.939 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.939 "name": "BaseBdev2", 00:18:51.939 "aliases": [ 00:18:51.939 "0746db65-0e3c-478f-bb4b-701047313080" 00:18:51.939 ], 00:18:51.939 "product_name": "Malloc disk", 00:18:51.939 "block_size": 512, 00:18:51.939 "num_blocks": 65536, 00:18:51.939 "uuid": "0746db65-0e3c-478f-bb4b-701047313080", 00:18:51.939 "assigned_rate_limits": { 00:18:51.939 "rw_ios_per_sec": 0, 00:18:51.939 "rw_mbytes_per_sec": 0, 00:18:51.939 "r_mbytes_per_sec": 0, 00:18:51.939 "w_mbytes_per_sec": 0 00:18:51.939 }, 00:18:51.939 "claimed": true, 00:18:51.939 "claim_type": "exclusive_write", 00:18:51.939 "zoned": false, 00:18:51.939 "supported_io_types": { 00:18:51.939 "read": true, 00:18:51.939 "write": true, 00:18:51.939 "unmap": true, 
00:18:51.939 "flush": true, 00:18:51.939 "reset": true, 00:18:51.939 "nvme_admin": false, 00:18:51.939 "nvme_io": false, 00:18:51.939 "nvme_io_md": false, 00:18:51.939 "write_zeroes": true, 00:18:51.939 "zcopy": true, 00:18:51.939 "get_zone_info": false, 00:18:51.940 "zone_management": false, 00:18:51.940 "zone_append": false, 00:18:51.940 "compare": false, 00:18:51.940 "compare_and_write": false, 00:18:51.940 "abort": true, 00:18:51.940 "seek_hole": false, 00:18:51.940 "seek_data": false, 00:18:51.940 "copy": true, 00:18:51.940 "nvme_iov_md": false 00:18:51.940 }, 00:18:51.940 "memory_domains": [ 00:18:51.940 { 00:18:51.940 "dma_device_id": "system", 00:18:51.940 "dma_device_type": 1 00:18:51.940 }, 00:18:51.940 { 00:18:51.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.940 "dma_device_type": 2 00:18:51.940 } 00:18:51.940 ], 00:18:51.940 "driver_specific": {} 00:18:51.940 }' 00:18:51.940 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.940 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.198 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.457 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.457 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.457 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:52.457 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:52.457 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:52.717 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:52.717 "name": "BaseBdev3", 00:18:52.717 "aliases": [ 00:18:52.717 "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3" 00:18:52.717 ], 00:18:52.717 "product_name": "Malloc disk", 00:18:52.717 "block_size": 512, 00:18:52.717 "num_blocks": 65536, 00:18:52.717 "uuid": "bb3f2483-a7c6-4afa-a452-1e5cb0c2dcc3", 00:18:52.717 "assigned_rate_limits": { 00:18:52.717 "rw_ios_per_sec": 0, 00:18:52.717 "rw_mbytes_per_sec": 0, 00:18:52.717 "r_mbytes_per_sec": 0, 00:18:52.717 "w_mbytes_per_sec": 0 00:18:52.717 }, 00:18:52.717 "claimed": true, 00:18:52.717 "claim_type": "exclusive_write", 00:18:52.717 "zoned": false, 00:18:52.717 "supported_io_types": { 00:18:52.717 "read": true, 00:18:52.717 "write": true, 00:18:52.717 "unmap": true, 00:18:52.717 "flush": true, 00:18:52.717 "reset": true, 00:18:52.717 "nvme_admin": false, 00:18:52.717 "nvme_io": false, 00:18:52.717 "nvme_io_md": false, 00:18:52.717 "write_zeroes": true, 
00:18:52.717 "zcopy": true, 00:18:52.717 "get_zone_info": false, 00:18:52.717 "zone_management": false, 00:18:52.717 "zone_append": false, 00:18:52.717 "compare": false, 00:18:52.717 "compare_and_write": false, 00:18:52.717 "abort": true, 00:18:52.717 "seek_hole": false, 00:18:52.717 "seek_data": false, 00:18:52.717 "copy": true, 00:18:52.717 "nvme_iov_md": false 00:18:52.717 }, 00:18:52.717 "memory_domains": [ 00:18:52.717 { 00:18:52.717 "dma_device_id": "system", 00:18:52.717 "dma_device_type": 1 00:18:52.717 }, 00:18:52.717 { 00:18:52.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.717 "dma_device_type": 2 00:18:52.717 } 00:18:52.717 ], 00:18:52.717 "driver_specific": {} 00:18:52.717 }' 00:18:52.717 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.717 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:52.717 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:52.717 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.717 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:52.976 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:53.545 [2024-07-25 14:01:42.308333] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:53.545 [2024-07-25 14:01:42.308403] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.545 [2024-07-25 14:01:42.308490] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.545 [2024-07-25 14:01:42.308559] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.545 [2024-07-25 14:01:42.308572] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 124957 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 124957 ']' 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 124957 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124957 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124957' 00:18:53.545 killing process with pid 124957 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 124957 00:18:53.545 [2024-07-25 14:01:42.357869] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.545 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 124957 00:18:53.804 [2024-07-25 14:01:42.611142] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.738 14:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:54.738 00:18:54.738 real 0m34.497s 00:18:54.738 user 1m4.292s 00:18:54.738 sys 0m3.836s 00:18:54.738 14:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:54.738 14:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.739 ************************************ 00:18:54.739 END TEST raid_state_function_test 00:18:54.739 ************************************ 00:18:54.997 14:01:43 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:18:54.997 14:01:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:54.997 14:01:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:54.997 14:01:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.997 ************************************ 00:18:54.997 START TEST raid_state_function_test_sb 00:18:54.997 ************************************ 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:54.997 14:01:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=125993 00:18:54.997 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 125993' 00:18:54.997 Process raid pid: 125993 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 125993 /var/tmp/spdk-raid.sock 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 125993 ']' 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.998 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.998 [2024-07-25 14:01:43.908649] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
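For readers following the trace, the superblock variant that starts here ends up driving the same RPC sequence as the plain test above, just with -s added to the create call. A minimal sketch of that sequence, in the same shell the test scripts use, is below; the rpc shorthand variable is my own, the paths and flags are the ones already visible in the trace, and the real bdev_raid.sh helpers add the polling, timeouts and assertions that are omitted here:

    # Sketch only. The ordering mirrors the trace: the raid is created first,
    # so it sits in state "configuring" until its base bdevs show up.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # -z 64 = 64 KiB strip size, -s = write an on-disk superblock (the _sb part).
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Back the raid with malloc disks; the 32/512 arguments give the
    # 65536 x 512 B bdevs reported in the dumps. Each one is claimed as it appears.
    $rpc bdev_malloc_create 32 512 -b BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    $rpc bdev_malloc_create 32 512 -b BaseBdev3

    # Once num_base_bdevs_discovered reaches 3 the state flips to "online".
    # Same query and jq filter the verify helper runs, narrowed to the state field:
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

    # Tear-down order used by the test: raid volume first, then base bdevs.
    $rpc bdev_raid_delete Existed_Raid
    $rpc bdev_malloc_delete BaseBdev1

The only visible difference from the non-superblock run shows up in the dumps that follow: superblock is true and each base bdev reserves a 2048-block data_offset for it, so data_size drops from 65536 to 63488 blocks.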
00:18:54.998 [2024-07-25 14:01:43.908913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.256 [2024-07-25 14:01:44.080746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.514 [2024-07-25 14:01:44.307534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.514 [2024-07-25 14:01:44.518547] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.080 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.080 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:56.080 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:56.337 [2024-07-25 14:01:45.209021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.337 [2024-07-25 14:01:45.209162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.337 [2024-07-25 14:01:45.209179] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.337 [2024-07-25 14:01:45.209211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.337 [2024-07-25 14:01:45.209221] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:56.337 [2024-07-25 14:01:45.209239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.337 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.338 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.338 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.338 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.595 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.595 "name": "Existed_Raid", 00:18:56.595 "uuid": 
"87323bbf-d5a6-464c-a1ce-f14a44f6c224", 00:18:56.595 "strip_size_kb": 64, 00:18:56.595 "state": "configuring", 00:18:56.595 "raid_level": "raid0", 00:18:56.595 "superblock": true, 00:18:56.595 "num_base_bdevs": 3, 00:18:56.595 "num_base_bdevs_discovered": 0, 00:18:56.595 "num_base_bdevs_operational": 3, 00:18:56.595 "base_bdevs_list": [ 00:18:56.595 { 00:18:56.595 "name": "BaseBdev1", 00:18:56.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.595 "is_configured": false, 00:18:56.595 "data_offset": 0, 00:18:56.595 "data_size": 0 00:18:56.595 }, 00:18:56.595 { 00:18:56.595 "name": "BaseBdev2", 00:18:56.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.595 "is_configured": false, 00:18:56.595 "data_offset": 0, 00:18:56.595 "data_size": 0 00:18:56.595 }, 00:18:56.595 { 00:18:56.595 "name": "BaseBdev3", 00:18:56.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.595 "is_configured": false, 00:18:56.595 "data_offset": 0, 00:18:56.595 "data_size": 0 00:18:56.595 } 00:18:56.595 ] 00:18:56.595 }' 00:18:56.595 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.595 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.160 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:57.727 [2024-07-25 14:01:46.481137] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.727 [2024-07-25 14:01:46.481191] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:18:57.727 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:57.727 [2024-07-25 14:01:46.773219] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.727 [2024-07-25 14:01:46.773312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.727 [2024-07-25 14:01:46.773342] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.727 [2024-07-25 14:01:46.773364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.727 [2024-07-25 14:01:46.773372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:57.727 [2024-07-25 14:01:46.773414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.984 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:58.242 [2024-07-25 14:01:47.086733] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.242 BaseBdev1 00:18:58.242 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:58.242 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:58.242 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:58.242 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:18:58.242 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:58.242 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:58.242 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.499 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.757 [ 00:18:58.757 { 00:18:58.757 "name": "BaseBdev1", 00:18:58.757 "aliases": [ 00:18:58.757 "da0c862b-6d42-4312-b38e-6b81ce01cf52" 00:18:58.757 ], 00:18:58.757 "product_name": "Malloc disk", 00:18:58.757 "block_size": 512, 00:18:58.757 "num_blocks": 65536, 00:18:58.757 "uuid": "da0c862b-6d42-4312-b38e-6b81ce01cf52", 00:18:58.757 "assigned_rate_limits": { 00:18:58.757 "rw_ios_per_sec": 0, 00:18:58.757 "rw_mbytes_per_sec": 0, 00:18:58.757 "r_mbytes_per_sec": 0, 00:18:58.757 "w_mbytes_per_sec": 0 00:18:58.757 }, 00:18:58.757 "claimed": true, 00:18:58.757 "claim_type": "exclusive_write", 00:18:58.757 "zoned": false, 00:18:58.757 "supported_io_types": { 00:18:58.757 "read": true, 00:18:58.757 "write": true, 00:18:58.757 "unmap": true, 00:18:58.757 "flush": true, 00:18:58.757 "reset": true, 00:18:58.757 "nvme_admin": false, 00:18:58.757 "nvme_io": false, 00:18:58.757 "nvme_io_md": false, 00:18:58.757 "write_zeroes": true, 00:18:58.757 "zcopy": true, 00:18:58.757 "get_zone_info": false, 00:18:58.757 "zone_management": false, 00:18:58.757 "zone_append": false, 00:18:58.757 "compare": false, 00:18:58.757 "compare_and_write": false, 00:18:58.757 "abort": true, 00:18:58.757 "seek_hole": false, 00:18:58.757 "seek_data": false, 00:18:58.757 "copy": true, 00:18:58.757 "nvme_iov_md": false 00:18:58.757 }, 00:18:58.757 "memory_domains": [ 00:18:58.757 { 00:18:58.757 "dma_device_id": "system", 00:18:58.757 "dma_device_type": 1 00:18:58.757 }, 00:18:58.757 { 00:18:58.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.757 "dma_device_type": 2 00:18:58.757 } 00:18:58.757 ], 00:18:58.757 "driver_specific": {} 00:18:58.757 } 00:18:58.757 ] 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.757 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.016 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.016 "name": "Existed_Raid", 00:18:59.016 "uuid": "24723410-dda4-4172-a2d5-45a8ca850722", 00:18:59.016 "strip_size_kb": 64, 00:18:59.016 "state": "configuring", 00:18:59.016 "raid_level": "raid0", 00:18:59.016 "superblock": true, 00:18:59.016 "num_base_bdevs": 3, 00:18:59.016 "num_base_bdevs_discovered": 1, 00:18:59.016 "num_base_bdevs_operational": 3, 00:18:59.016 "base_bdevs_list": [ 00:18:59.016 { 00:18:59.016 "name": "BaseBdev1", 00:18:59.016 "uuid": "da0c862b-6d42-4312-b38e-6b81ce01cf52", 00:18:59.016 "is_configured": true, 00:18:59.016 "data_offset": 2048, 00:18:59.016 "data_size": 63488 00:18:59.016 }, 00:18:59.016 { 00:18:59.016 "name": "BaseBdev2", 00:18:59.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.016 "is_configured": false, 00:18:59.016 "data_offset": 0, 00:18:59.016 "data_size": 0 00:18:59.016 }, 00:18:59.016 { 00:18:59.016 "name": "BaseBdev3", 00:18:59.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.016 "is_configured": false, 00:18:59.016 "data_offset": 0, 00:18:59.016 "data_size": 0 00:18:59.016 } 00:18:59.016 ] 00:18:59.016 }' 00:18:59.016 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.016 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.583 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:00.147 [2024-07-25 14:01:48.891181] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.147 [2024-07-25 14:01:48.891254] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:19:00.147 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:00.147 [2024-07-25 14:01:49.175262] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.147 [2024-07-25 14:01:49.177841] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.147 [2024-07-25 14:01:49.177950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.147 [2024-07-25 14:01:49.177966] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.147 [2024-07-25 14:01:49.178010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:00.147 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:00.147 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:00.405 14:01:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.405 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.663 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.663 "name": "Existed_Raid", 00:19:00.663 "uuid": "0ab872ab-6b27-49cb-9156-549db5bac719", 00:19:00.663 "strip_size_kb": 64, 00:19:00.663 "state": "configuring", 00:19:00.663 "raid_level": "raid0", 00:19:00.663 "superblock": true, 00:19:00.663 "num_base_bdevs": 3, 00:19:00.663 "num_base_bdevs_discovered": 1, 00:19:00.663 "num_base_bdevs_operational": 3, 00:19:00.663 "base_bdevs_list": [ 00:19:00.663 { 00:19:00.663 "name": "BaseBdev1", 00:19:00.663 "uuid": "da0c862b-6d42-4312-b38e-6b81ce01cf52", 00:19:00.663 "is_configured": true, 00:19:00.663 "data_offset": 2048, 00:19:00.663 "data_size": 63488 00:19:00.663 }, 00:19:00.663 { 00:19:00.663 "name": "BaseBdev2", 00:19:00.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.663 "is_configured": false, 00:19:00.663 "data_offset": 0, 00:19:00.663 "data_size": 0 00:19:00.663 }, 00:19:00.663 { 00:19:00.663 "name": "BaseBdev3", 00:19:00.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.663 "is_configured": false, 00:19:00.663 "data_offset": 0, 00:19:00.663 "data_size": 0 00:19:00.663 } 00:19:00.663 ] 00:19:00.663 }' 00:19:00.663 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.663 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.229 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:01.487 [2024-07-25 14:01:50.519414] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.487 BaseBdev2 00:19:01.745 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:01.745 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:01.745 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:01.745 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local i 00:19:01.745 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:01.745 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:01.745 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.004 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:02.262 [ 00:19:02.262 { 00:19:02.262 "name": "BaseBdev2", 00:19:02.262 "aliases": [ 00:19:02.262 "93e0c1cf-9cb0-40a6-9fe1-eacd639828af" 00:19:02.262 ], 00:19:02.262 "product_name": "Malloc disk", 00:19:02.262 "block_size": 512, 00:19:02.262 "num_blocks": 65536, 00:19:02.262 "uuid": "93e0c1cf-9cb0-40a6-9fe1-eacd639828af", 00:19:02.262 "assigned_rate_limits": { 00:19:02.262 "rw_ios_per_sec": 0, 00:19:02.262 "rw_mbytes_per_sec": 0, 00:19:02.262 "r_mbytes_per_sec": 0, 00:19:02.262 "w_mbytes_per_sec": 0 00:19:02.262 }, 00:19:02.262 "claimed": true, 00:19:02.262 "claim_type": "exclusive_write", 00:19:02.263 "zoned": false, 00:19:02.263 "supported_io_types": { 00:19:02.263 "read": true, 00:19:02.263 "write": true, 00:19:02.263 "unmap": true, 00:19:02.263 "flush": true, 00:19:02.263 "reset": true, 00:19:02.263 "nvme_admin": false, 00:19:02.263 "nvme_io": false, 00:19:02.263 "nvme_io_md": false, 00:19:02.263 "write_zeroes": true, 00:19:02.263 "zcopy": true, 00:19:02.263 "get_zone_info": false, 00:19:02.263 "zone_management": false, 00:19:02.263 "zone_append": false, 00:19:02.263 "compare": false, 00:19:02.263 "compare_and_write": false, 00:19:02.263 "abort": true, 00:19:02.263 "seek_hole": false, 00:19:02.263 "seek_data": false, 00:19:02.263 "copy": true, 00:19:02.263 "nvme_iov_md": false 00:19:02.263 }, 00:19:02.263 "memory_domains": [ 00:19:02.263 { 00:19:02.263 "dma_device_id": "system", 00:19:02.263 "dma_device_type": 1 00:19:02.263 }, 00:19:02.263 { 00:19:02.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.263 "dma_device_type": 2 00:19:02.263 } 00:19:02.263 ], 00:19:02.263 "driver_specific": {} 00:19:02.263 } 00:19:02.263 ] 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.263 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.521 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.521 "name": "Existed_Raid", 00:19:02.521 "uuid": "0ab872ab-6b27-49cb-9156-549db5bac719", 00:19:02.521 "strip_size_kb": 64, 00:19:02.521 "state": "configuring", 00:19:02.521 "raid_level": "raid0", 00:19:02.521 "superblock": true, 00:19:02.521 "num_base_bdevs": 3, 00:19:02.521 "num_base_bdevs_discovered": 2, 00:19:02.521 "num_base_bdevs_operational": 3, 00:19:02.521 "base_bdevs_list": [ 00:19:02.521 { 00:19:02.521 "name": "BaseBdev1", 00:19:02.521 "uuid": "da0c862b-6d42-4312-b38e-6b81ce01cf52", 00:19:02.521 "is_configured": true, 00:19:02.521 "data_offset": 2048, 00:19:02.521 "data_size": 63488 00:19:02.521 }, 00:19:02.521 { 00:19:02.521 "name": "BaseBdev2", 00:19:02.521 "uuid": "93e0c1cf-9cb0-40a6-9fe1-eacd639828af", 00:19:02.521 "is_configured": true, 00:19:02.521 "data_offset": 2048, 00:19:02.521 "data_size": 63488 00:19:02.521 }, 00:19:02.521 { 00:19:02.521 "name": "BaseBdev3", 00:19:02.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.521 "is_configured": false, 00:19:02.521 "data_offset": 0, 00:19:02.521 "data_size": 0 00:19:02.521 } 00:19:02.521 ] 00:19:02.521 }' 00:19:02.521 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.521 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.085 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:03.652 [2024-07-25 14:01:52.436948] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.652 [2024-07-25 14:01:52.437231] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:19:03.652 [2024-07-25 14:01:52.437248] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:03.652 [2024-07-25 14:01:52.437394] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:03.652 [2024-07-25 14:01:52.437866] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:19:03.652 [2024-07-25 14:01:52.437893] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:19:03.652 [2024-07-25 14:01:52.438064] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.652 BaseBdev3 00:19:03.652 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:03.652 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:03.652 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:03.652 14:01:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:19:03.652 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:03.652 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:03.652 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.933 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:04.217 [ 00:19:04.217 { 00:19:04.217 "name": "BaseBdev3", 00:19:04.217 "aliases": [ 00:19:04.217 "4da6d895-9576-4a03-870d-eafe32c158f0" 00:19:04.217 ], 00:19:04.217 "product_name": "Malloc disk", 00:19:04.217 "block_size": 512, 00:19:04.217 "num_blocks": 65536, 00:19:04.217 "uuid": "4da6d895-9576-4a03-870d-eafe32c158f0", 00:19:04.217 "assigned_rate_limits": { 00:19:04.217 "rw_ios_per_sec": 0, 00:19:04.217 "rw_mbytes_per_sec": 0, 00:19:04.217 "r_mbytes_per_sec": 0, 00:19:04.217 "w_mbytes_per_sec": 0 00:19:04.217 }, 00:19:04.217 "claimed": true, 00:19:04.217 "claim_type": "exclusive_write", 00:19:04.217 "zoned": false, 00:19:04.217 "supported_io_types": { 00:19:04.217 "read": true, 00:19:04.217 "write": true, 00:19:04.217 "unmap": true, 00:19:04.217 "flush": true, 00:19:04.217 "reset": true, 00:19:04.217 "nvme_admin": false, 00:19:04.217 "nvme_io": false, 00:19:04.217 "nvme_io_md": false, 00:19:04.217 "write_zeroes": true, 00:19:04.217 "zcopy": true, 00:19:04.217 "get_zone_info": false, 00:19:04.217 "zone_management": false, 00:19:04.217 "zone_append": false, 00:19:04.217 "compare": false, 00:19:04.217 "compare_and_write": false, 00:19:04.217 "abort": true, 00:19:04.217 "seek_hole": false, 00:19:04.217 "seek_data": false, 00:19:04.217 "copy": true, 00:19:04.217 "nvme_iov_md": false 00:19:04.217 }, 00:19:04.217 "memory_domains": [ 00:19:04.217 { 00:19:04.217 "dma_device_id": "system", 00:19:04.217 "dma_device_type": 1 00:19:04.217 }, 00:19:04.217 { 00:19:04.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.217 "dma_device_type": 2 00:19:04.217 } 00:19:04.217 ], 00:19:04.217 "driver_specific": {} 00:19:04.217 } 00:19:04.217 ] 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.217 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.476 14:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.476 "name": "Existed_Raid", 00:19:04.476 "uuid": "0ab872ab-6b27-49cb-9156-549db5bac719", 00:19:04.476 "strip_size_kb": 64, 00:19:04.476 "state": "online", 00:19:04.476 "raid_level": "raid0", 00:19:04.476 "superblock": true, 00:19:04.476 "num_base_bdevs": 3, 00:19:04.476 "num_base_bdevs_discovered": 3, 00:19:04.476 "num_base_bdevs_operational": 3, 00:19:04.476 "base_bdevs_list": [ 00:19:04.476 { 00:19:04.476 "name": "BaseBdev1", 00:19:04.476 "uuid": "da0c862b-6d42-4312-b38e-6b81ce01cf52", 00:19:04.476 "is_configured": true, 00:19:04.476 "data_offset": 2048, 00:19:04.476 "data_size": 63488 00:19:04.476 }, 00:19:04.476 { 00:19:04.476 "name": "BaseBdev2", 00:19:04.476 "uuid": "93e0c1cf-9cb0-40a6-9fe1-eacd639828af", 00:19:04.476 "is_configured": true, 00:19:04.476 "data_offset": 2048, 00:19:04.476 "data_size": 63488 00:19:04.476 }, 00:19:04.476 { 00:19:04.476 "name": "BaseBdev3", 00:19:04.476 "uuid": "4da6d895-9576-4a03-870d-eafe32c158f0", 00:19:04.476 "is_configured": true, 00:19:04.476 "data_offset": 2048, 00:19:04.476 "data_size": 63488 00:19:04.476 } 00:19:04.476 ] 00:19:04.476 }' 00:19:04.476 14:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.476 14:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:05.040 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:05.298 [2024-07-25 14:01:54.282762] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.298 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:05.298 "name": "Existed_Raid", 00:19:05.298 "aliases": [ 00:19:05.298 "0ab872ab-6b27-49cb-9156-549db5bac719" 00:19:05.298 ], 00:19:05.298 "product_name": "Raid Volume", 00:19:05.298 "block_size": 512, 00:19:05.298 "num_blocks": 190464, 00:19:05.298 "uuid": "0ab872ab-6b27-49cb-9156-549db5bac719", 00:19:05.298 
"assigned_rate_limits": { 00:19:05.298 "rw_ios_per_sec": 0, 00:19:05.298 "rw_mbytes_per_sec": 0, 00:19:05.298 "r_mbytes_per_sec": 0, 00:19:05.298 "w_mbytes_per_sec": 0 00:19:05.298 }, 00:19:05.298 "claimed": false, 00:19:05.298 "zoned": false, 00:19:05.298 "supported_io_types": { 00:19:05.298 "read": true, 00:19:05.298 "write": true, 00:19:05.298 "unmap": true, 00:19:05.298 "flush": true, 00:19:05.298 "reset": true, 00:19:05.298 "nvme_admin": false, 00:19:05.298 "nvme_io": false, 00:19:05.298 "nvme_io_md": false, 00:19:05.298 "write_zeroes": true, 00:19:05.298 "zcopy": false, 00:19:05.298 "get_zone_info": false, 00:19:05.298 "zone_management": false, 00:19:05.298 "zone_append": false, 00:19:05.298 "compare": false, 00:19:05.298 "compare_and_write": false, 00:19:05.298 "abort": false, 00:19:05.298 "seek_hole": false, 00:19:05.298 "seek_data": false, 00:19:05.298 "copy": false, 00:19:05.298 "nvme_iov_md": false 00:19:05.298 }, 00:19:05.298 "memory_domains": [ 00:19:05.298 { 00:19:05.298 "dma_device_id": "system", 00:19:05.298 "dma_device_type": 1 00:19:05.298 }, 00:19:05.298 { 00:19:05.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.299 "dma_device_type": 2 00:19:05.299 }, 00:19:05.299 { 00:19:05.299 "dma_device_id": "system", 00:19:05.299 "dma_device_type": 1 00:19:05.299 }, 00:19:05.299 { 00:19:05.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.299 "dma_device_type": 2 00:19:05.299 }, 00:19:05.299 { 00:19:05.299 "dma_device_id": "system", 00:19:05.299 "dma_device_type": 1 00:19:05.299 }, 00:19:05.299 { 00:19:05.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.299 "dma_device_type": 2 00:19:05.299 } 00:19:05.299 ], 00:19:05.299 "driver_specific": { 00:19:05.299 "raid": { 00:19:05.299 "uuid": "0ab872ab-6b27-49cb-9156-549db5bac719", 00:19:05.299 "strip_size_kb": 64, 00:19:05.299 "state": "online", 00:19:05.299 "raid_level": "raid0", 00:19:05.299 "superblock": true, 00:19:05.299 "num_base_bdevs": 3, 00:19:05.299 "num_base_bdevs_discovered": 3, 00:19:05.299 "num_base_bdevs_operational": 3, 00:19:05.299 "base_bdevs_list": [ 00:19:05.299 { 00:19:05.299 "name": "BaseBdev1", 00:19:05.299 "uuid": "da0c862b-6d42-4312-b38e-6b81ce01cf52", 00:19:05.299 "is_configured": true, 00:19:05.299 "data_offset": 2048, 00:19:05.299 "data_size": 63488 00:19:05.299 }, 00:19:05.299 { 00:19:05.299 "name": "BaseBdev2", 00:19:05.299 "uuid": "93e0c1cf-9cb0-40a6-9fe1-eacd639828af", 00:19:05.299 "is_configured": true, 00:19:05.299 "data_offset": 2048, 00:19:05.299 "data_size": 63488 00:19:05.299 }, 00:19:05.299 { 00:19:05.299 "name": "BaseBdev3", 00:19:05.299 "uuid": "4da6d895-9576-4a03-870d-eafe32c158f0", 00:19:05.299 "is_configured": true, 00:19:05.299 "data_offset": 2048, 00:19:05.299 "data_size": 63488 00:19:05.299 } 00:19:05.299 ] 00:19:05.299 } 00:19:05.299 } 00:19:05.299 }' 00:19:05.299 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.557 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:05.557 BaseBdev2 00:19:05.557 BaseBdev3' 00:19:05.557 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.557 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:05.557 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:19:05.557 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.557 "name": "BaseBdev1", 00:19:05.557 "aliases": [ 00:19:05.557 "da0c862b-6d42-4312-b38e-6b81ce01cf52" 00:19:05.557 ], 00:19:05.557 "product_name": "Malloc disk", 00:19:05.557 "block_size": 512, 00:19:05.557 "num_blocks": 65536, 00:19:05.557 "uuid": "da0c862b-6d42-4312-b38e-6b81ce01cf52", 00:19:05.557 "assigned_rate_limits": { 00:19:05.557 "rw_ios_per_sec": 0, 00:19:05.557 "rw_mbytes_per_sec": 0, 00:19:05.557 "r_mbytes_per_sec": 0, 00:19:05.557 "w_mbytes_per_sec": 0 00:19:05.557 }, 00:19:05.557 "claimed": true, 00:19:05.557 "claim_type": "exclusive_write", 00:19:05.557 "zoned": false, 00:19:05.557 "supported_io_types": { 00:19:05.557 "read": true, 00:19:05.557 "write": true, 00:19:05.557 "unmap": true, 00:19:05.557 "flush": true, 00:19:05.557 "reset": true, 00:19:05.557 "nvme_admin": false, 00:19:05.557 "nvme_io": false, 00:19:05.557 "nvme_io_md": false, 00:19:05.557 "write_zeroes": true, 00:19:05.557 "zcopy": true, 00:19:05.557 "get_zone_info": false, 00:19:05.557 "zone_management": false, 00:19:05.557 "zone_append": false, 00:19:05.557 "compare": false, 00:19:05.557 "compare_and_write": false, 00:19:05.557 "abort": true, 00:19:05.557 "seek_hole": false, 00:19:05.557 "seek_data": false, 00:19:05.557 "copy": true, 00:19:05.557 "nvme_iov_md": false 00:19:05.557 }, 00:19:05.557 "memory_domains": [ 00:19:05.557 { 00:19:05.557 "dma_device_id": "system", 00:19:05.557 "dma_device_type": 1 00:19:05.557 }, 00:19:05.557 { 00:19:05.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.557 "dma_device_type": 2 00:19:05.557 } 00:19:05.557 ], 00:19:05.557 "driver_specific": {} 00:19:05.557 }' 00:19:05.815 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.815 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.815 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.815 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.815 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.815 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.815 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.073 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.073 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.073 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.073 14:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.073 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:06.073 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.073 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:06.073 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:06.331 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:06.331 "name": "BaseBdev2", 
00:19:06.331 "aliases": [ 00:19:06.331 "93e0c1cf-9cb0-40a6-9fe1-eacd639828af" 00:19:06.331 ], 00:19:06.331 "product_name": "Malloc disk", 00:19:06.331 "block_size": 512, 00:19:06.331 "num_blocks": 65536, 00:19:06.331 "uuid": "93e0c1cf-9cb0-40a6-9fe1-eacd639828af", 00:19:06.331 "assigned_rate_limits": { 00:19:06.331 "rw_ios_per_sec": 0, 00:19:06.331 "rw_mbytes_per_sec": 0, 00:19:06.331 "r_mbytes_per_sec": 0, 00:19:06.331 "w_mbytes_per_sec": 0 00:19:06.331 }, 00:19:06.331 "claimed": true, 00:19:06.331 "claim_type": "exclusive_write", 00:19:06.331 "zoned": false, 00:19:06.331 "supported_io_types": { 00:19:06.331 "read": true, 00:19:06.331 "write": true, 00:19:06.331 "unmap": true, 00:19:06.331 "flush": true, 00:19:06.331 "reset": true, 00:19:06.331 "nvme_admin": false, 00:19:06.331 "nvme_io": false, 00:19:06.331 "nvme_io_md": false, 00:19:06.331 "write_zeroes": true, 00:19:06.331 "zcopy": true, 00:19:06.331 "get_zone_info": false, 00:19:06.331 "zone_management": false, 00:19:06.331 "zone_append": false, 00:19:06.331 "compare": false, 00:19:06.331 "compare_and_write": false, 00:19:06.331 "abort": true, 00:19:06.331 "seek_hole": false, 00:19:06.331 "seek_data": false, 00:19:06.331 "copy": true, 00:19:06.331 "nvme_iov_md": false 00:19:06.331 }, 00:19:06.331 "memory_domains": [ 00:19:06.331 { 00:19:06.331 "dma_device_id": "system", 00:19:06.331 "dma_device_type": 1 00:19:06.331 }, 00:19:06.331 { 00:19:06.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.331 "dma_device_type": 2 00:19:06.331 } 00:19:06.331 ], 00:19:06.331 "driver_specific": {} 00:19:06.331 }' 00:19:06.331 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.331 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.331 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:06.331 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.590 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.590 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:06.590 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.590 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.590 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.590 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.848 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.848 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:06.848 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.848 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:06.848 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.106 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.106 "name": "BaseBdev3", 00:19:07.106 "aliases": [ 00:19:07.106 "4da6d895-9576-4a03-870d-eafe32c158f0" 00:19:07.106 ], 00:19:07.106 "product_name": "Malloc disk", 00:19:07.106 
"block_size": 512, 00:19:07.106 "num_blocks": 65536, 00:19:07.106 "uuid": "4da6d895-9576-4a03-870d-eafe32c158f0", 00:19:07.106 "assigned_rate_limits": { 00:19:07.106 "rw_ios_per_sec": 0, 00:19:07.106 "rw_mbytes_per_sec": 0, 00:19:07.106 "r_mbytes_per_sec": 0, 00:19:07.106 "w_mbytes_per_sec": 0 00:19:07.106 }, 00:19:07.106 "claimed": true, 00:19:07.106 "claim_type": "exclusive_write", 00:19:07.106 "zoned": false, 00:19:07.106 "supported_io_types": { 00:19:07.106 "read": true, 00:19:07.106 "write": true, 00:19:07.106 "unmap": true, 00:19:07.106 "flush": true, 00:19:07.106 "reset": true, 00:19:07.106 "nvme_admin": false, 00:19:07.106 "nvme_io": false, 00:19:07.106 "nvme_io_md": false, 00:19:07.106 "write_zeroes": true, 00:19:07.106 "zcopy": true, 00:19:07.106 "get_zone_info": false, 00:19:07.106 "zone_management": false, 00:19:07.106 "zone_append": false, 00:19:07.106 "compare": false, 00:19:07.106 "compare_and_write": false, 00:19:07.106 "abort": true, 00:19:07.106 "seek_hole": false, 00:19:07.106 "seek_data": false, 00:19:07.106 "copy": true, 00:19:07.106 "nvme_iov_md": false 00:19:07.106 }, 00:19:07.106 "memory_domains": [ 00:19:07.106 { 00:19:07.106 "dma_device_id": "system", 00:19:07.106 "dma_device_type": 1 00:19:07.106 }, 00:19:07.106 { 00:19:07.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.106 "dma_device_type": 2 00:19:07.106 } 00:19:07.106 ], 00:19:07.106 "driver_specific": {} 00:19:07.106 }' 00:19:07.106 14:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.106 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.106 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.106 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.106 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.364 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:07.939 [2024-07-25 14:01:56.690986] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:07.939 [2024-07-25 14:01:56.691076] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.939 [2024-07-25 14:01:56.691145] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.939 14:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.203 14:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.203 "name": "Existed_Raid", 00:19:08.203 "uuid": "0ab872ab-6b27-49cb-9156-549db5bac719", 00:19:08.203 "strip_size_kb": 64, 00:19:08.203 "state": "offline", 00:19:08.203 "raid_level": "raid0", 00:19:08.203 "superblock": true, 00:19:08.203 "num_base_bdevs": 3, 00:19:08.203 "num_base_bdevs_discovered": 2, 00:19:08.203 "num_base_bdevs_operational": 2, 00:19:08.203 "base_bdevs_list": [ 00:19:08.203 { 00:19:08.203 "name": null, 00:19:08.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.203 "is_configured": false, 00:19:08.203 "data_offset": 2048, 00:19:08.203 "data_size": 63488 00:19:08.203 }, 00:19:08.203 { 00:19:08.203 "name": "BaseBdev2", 00:19:08.203 "uuid": "93e0c1cf-9cb0-40a6-9fe1-eacd639828af", 00:19:08.203 "is_configured": true, 00:19:08.203 "data_offset": 2048, 00:19:08.203 "data_size": 63488 00:19:08.203 }, 00:19:08.203 { 00:19:08.203 "name": "BaseBdev3", 00:19:08.203 "uuid": "4da6d895-9576-4a03-870d-eafe32c158f0", 00:19:08.203 "is_configured": true, 00:19:08.203 "data_offset": 2048, 00:19:08.203 "data_size": 63488 00:19:08.203 } 00:19:08.203 ] 00:19:08.203 }' 00:19:08.203 14:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.203 14:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.138 14:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:09.138 14:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:09.138 14:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:09.139 14:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:09.139 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:09.139 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.139 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:09.396 [2024-07-25 14:01:58.353057] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.654 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:09.654 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:09.654 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:09.654 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.912 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:09.913 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.913 14:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:10.171 [2024-07-25 14:01:59.002455] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:10.171 [2024-07-25 14:01:59.002536] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:19:10.171 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:10.171 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.171 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.171 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.429 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:10.429 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:10.429 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:10.429 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:10.429 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:10.429 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:10.687 BaseBdev2 00:19:10.687 14:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:10.687 14:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:10.687 14:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:10.687 14:01:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:19:10.687 14:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:10.687 14:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:10.687 14:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:10.945 14:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:11.203 [ 00:19:11.203 { 00:19:11.203 "name": "BaseBdev2", 00:19:11.203 "aliases": [ 00:19:11.203 "dc6e506e-c045-4520-bb15-da5b0893b5a7" 00:19:11.203 ], 00:19:11.203 "product_name": "Malloc disk", 00:19:11.203 "block_size": 512, 00:19:11.203 "num_blocks": 65536, 00:19:11.203 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:11.203 "assigned_rate_limits": { 00:19:11.203 "rw_ios_per_sec": 0, 00:19:11.203 "rw_mbytes_per_sec": 0, 00:19:11.203 "r_mbytes_per_sec": 0, 00:19:11.203 "w_mbytes_per_sec": 0 00:19:11.203 }, 00:19:11.203 "claimed": false, 00:19:11.203 "zoned": false, 00:19:11.203 "supported_io_types": { 00:19:11.203 "read": true, 00:19:11.203 "write": true, 00:19:11.203 "unmap": true, 00:19:11.203 "flush": true, 00:19:11.203 "reset": true, 00:19:11.203 "nvme_admin": false, 00:19:11.203 "nvme_io": false, 00:19:11.203 "nvme_io_md": false, 00:19:11.203 "write_zeroes": true, 00:19:11.203 "zcopy": true, 00:19:11.203 "get_zone_info": false, 00:19:11.203 "zone_management": false, 00:19:11.203 "zone_append": false, 00:19:11.203 "compare": false, 00:19:11.203 "compare_and_write": false, 00:19:11.203 "abort": true, 00:19:11.203 "seek_hole": false, 00:19:11.203 "seek_data": false, 00:19:11.203 "copy": true, 00:19:11.203 "nvme_iov_md": false 00:19:11.203 }, 00:19:11.203 "memory_domains": [ 00:19:11.203 { 00:19:11.203 "dma_device_id": "system", 00:19:11.203 "dma_device_type": 1 00:19:11.203 }, 00:19:11.203 { 00:19:11.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.203 "dma_device_type": 2 00:19:11.203 } 00:19:11.203 ], 00:19:11.204 "driver_specific": {} 00:19:11.204 } 00:19:11.204 ] 00:19:11.204 14:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:11.204 14:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:11.204 14:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:11.204 14:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:11.462 BaseBdev3 00:19:11.462 14:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:11.462 14:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:11.462 14:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:11.462 14:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:11.462 14:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:11.462 14:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:11.462 14:02:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.720 14:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:12.286 [ 00:19:12.286 { 00:19:12.286 "name": "BaseBdev3", 00:19:12.286 "aliases": [ 00:19:12.286 "15f61e3f-36a4-4b8e-979e-e3fcc330872b" 00:19:12.286 ], 00:19:12.286 "product_name": "Malloc disk", 00:19:12.286 "block_size": 512, 00:19:12.286 "num_blocks": 65536, 00:19:12.286 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:12.286 "assigned_rate_limits": { 00:19:12.286 "rw_ios_per_sec": 0, 00:19:12.286 "rw_mbytes_per_sec": 0, 00:19:12.286 "r_mbytes_per_sec": 0, 00:19:12.286 "w_mbytes_per_sec": 0 00:19:12.286 }, 00:19:12.286 "claimed": false, 00:19:12.286 "zoned": false, 00:19:12.286 "supported_io_types": { 00:19:12.286 "read": true, 00:19:12.286 "write": true, 00:19:12.286 "unmap": true, 00:19:12.286 "flush": true, 00:19:12.286 "reset": true, 00:19:12.286 "nvme_admin": false, 00:19:12.286 "nvme_io": false, 00:19:12.286 "nvme_io_md": false, 00:19:12.286 "write_zeroes": true, 00:19:12.286 "zcopy": true, 00:19:12.286 "get_zone_info": false, 00:19:12.286 "zone_management": false, 00:19:12.286 "zone_append": false, 00:19:12.286 "compare": false, 00:19:12.286 "compare_and_write": false, 00:19:12.286 "abort": true, 00:19:12.286 "seek_hole": false, 00:19:12.286 "seek_data": false, 00:19:12.286 "copy": true, 00:19:12.286 "nvme_iov_md": false 00:19:12.286 }, 00:19:12.286 "memory_domains": [ 00:19:12.286 { 00:19:12.286 "dma_device_id": "system", 00:19:12.286 "dma_device_type": 1 00:19:12.286 }, 00:19:12.286 { 00:19:12.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.286 "dma_device_type": 2 00:19:12.286 } 00:19:12.286 ], 00:19:12.286 "driver_specific": {} 00:19:12.286 } 00:19:12.286 ] 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:12.286 [2024-07-25 14:02:01.288536] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.286 [2024-07-25 14:02:01.289147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.286 [2024-07-25 14:02:01.289234] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.286 [2024-07-25 14:02:01.291450] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.286 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.852 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.852 "name": "Existed_Raid", 00:19:12.852 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:12.852 "strip_size_kb": 64, 00:19:12.852 "state": "configuring", 00:19:12.852 "raid_level": "raid0", 00:19:12.852 "superblock": true, 00:19:12.852 "num_base_bdevs": 3, 00:19:12.852 "num_base_bdevs_discovered": 2, 00:19:12.852 "num_base_bdevs_operational": 3, 00:19:12.852 "base_bdevs_list": [ 00:19:12.852 { 00:19:12.852 "name": "BaseBdev1", 00:19:12.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.852 "is_configured": false, 00:19:12.852 "data_offset": 0, 00:19:12.852 "data_size": 0 00:19:12.852 }, 00:19:12.852 { 00:19:12.852 "name": "BaseBdev2", 00:19:12.852 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:12.852 "is_configured": true, 00:19:12.852 "data_offset": 2048, 00:19:12.852 "data_size": 63488 00:19:12.852 }, 00:19:12.852 { 00:19:12.852 "name": "BaseBdev3", 00:19:12.852 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:12.852 "is_configured": true, 00:19:12.852 "data_offset": 2048, 00:19:12.852 "data_size": 63488 00:19:12.852 } 00:19:12.852 ] 00:19:12.852 }' 00:19:12.852 14:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.852 14:02:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.418 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:13.676 [2024-07-25 14:02:02.644771] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.676 14:02:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.676 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.935 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.935 "name": "Existed_Raid", 00:19:13.935 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:13.935 "strip_size_kb": 64, 00:19:13.935 "state": "configuring", 00:19:13.935 "raid_level": "raid0", 00:19:13.935 "superblock": true, 00:19:13.935 "num_base_bdevs": 3, 00:19:13.935 "num_base_bdevs_discovered": 1, 00:19:13.935 "num_base_bdevs_operational": 3, 00:19:13.935 "base_bdevs_list": [ 00:19:13.935 { 00:19:13.935 "name": "BaseBdev1", 00:19:13.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.935 "is_configured": false, 00:19:13.935 "data_offset": 0, 00:19:13.935 "data_size": 0 00:19:13.935 }, 00:19:13.935 { 00:19:13.935 "name": null, 00:19:13.935 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:13.935 "is_configured": false, 00:19:13.935 "data_offset": 2048, 00:19:13.935 "data_size": 63488 00:19:13.935 }, 00:19:13.935 { 00:19:13.935 "name": "BaseBdev3", 00:19:13.935 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:13.935 "is_configured": true, 00:19:13.935 "data_offset": 2048, 00:19:13.935 "data_size": 63488 00:19:13.935 } 00:19:13.935 ] 00:19:13.935 }' 00:19:13.935 14:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.935 14:02:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.880 14:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:14.880 14:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.172 14:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:15.172 14:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:15.443 [2024-07-25 14:02:04.340323] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.443 BaseBdev1 00:19:15.443 14:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:15.443 14:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:15.443 14:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:15.443 14:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:15.443 14:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:15.443 14:02:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:15.443 14:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.702 14:02:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:16.268 [ 00:19:16.268 { 00:19:16.268 "name": "BaseBdev1", 00:19:16.268 "aliases": [ 00:19:16.268 "852c3456-1c34-46a1-9a58-de1661eee406" 00:19:16.268 ], 00:19:16.268 "product_name": "Malloc disk", 00:19:16.268 "block_size": 512, 00:19:16.268 "num_blocks": 65536, 00:19:16.268 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:16.268 "assigned_rate_limits": { 00:19:16.268 "rw_ios_per_sec": 0, 00:19:16.268 "rw_mbytes_per_sec": 0, 00:19:16.268 "r_mbytes_per_sec": 0, 00:19:16.268 "w_mbytes_per_sec": 0 00:19:16.268 }, 00:19:16.268 "claimed": true, 00:19:16.268 "claim_type": "exclusive_write", 00:19:16.268 "zoned": false, 00:19:16.268 "supported_io_types": { 00:19:16.268 "read": true, 00:19:16.268 "write": true, 00:19:16.268 "unmap": true, 00:19:16.268 "flush": true, 00:19:16.268 "reset": true, 00:19:16.268 "nvme_admin": false, 00:19:16.268 "nvme_io": false, 00:19:16.268 "nvme_io_md": false, 00:19:16.268 "write_zeroes": true, 00:19:16.268 "zcopy": true, 00:19:16.268 "get_zone_info": false, 00:19:16.268 "zone_management": false, 00:19:16.268 "zone_append": false, 00:19:16.268 "compare": false, 00:19:16.268 "compare_and_write": false, 00:19:16.268 "abort": true, 00:19:16.268 "seek_hole": false, 00:19:16.268 "seek_data": false, 00:19:16.268 "copy": true, 00:19:16.268 "nvme_iov_md": false 00:19:16.268 }, 00:19:16.268 "memory_domains": [ 00:19:16.268 { 00:19:16.268 "dma_device_id": "system", 00:19:16.268 "dma_device_type": 1 00:19:16.268 }, 00:19:16.268 { 00:19:16.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.268 "dma_device_type": 2 00:19:16.268 } 00:19:16.268 ], 00:19:16.268 "driver_specific": {} 00:19:16.268 } 00:19:16.269 ] 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.269 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.527 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.527 "name": "Existed_Raid", 00:19:16.527 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:16.527 "strip_size_kb": 64, 00:19:16.527 "state": "configuring", 00:19:16.527 "raid_level": "raid0", 00:19:16.527 "superblock": true, 00:19:16.527 "num_base_bdevs": 3, 00:19:16.527 "num_base_bdevs_discovered": 2, 00:19:16.527 "num_base_bdevs_operational": 3, 00:19:16.527 "base_bdevs_list": [ 00:19:16.527 { 00:19:16.527 "name": "BaseBdev1", 00:19:16.527 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:16.527 "is_configured": true, 00:19:16.527 "data_offset": 2048, 00:19:16.527 "data_size": 63488 00:19:16.527 }, 00:19:16.527 { 00:19:16.528 "name": null, 00:19:16.528 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:16.528 "is_configured": false, 00:19:16.528 "data_offset": 2048, 00:19:16.528 "data_size": 63488 00:19:16.528 }, 00:19:16.528 { 00:19:16.528 "name": "BaseBdev3", 00:19:16.528 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:16.528 "is_configured": true, 00:19:16.528 "data_offset": 2048, 00:19:16.528 "data_size": 63488 00:19:16.528 } 00:19:16.528 ] 00:19:16.528 }' 00:19:16.528 14:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.528 14:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.095 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.095 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:17.354 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:17.354 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:17.612 [2024-07-25 14:02:06.622378] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.612 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.871 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.871 "name": "Existed_Raid", 00:19:17.871 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:17.871 "strip_size_kb": 64, 00:19:17.871 "state": "configuring", 00:19:17.871 "raid_level": "raid0", 00:19:17.871 "superblock": true, 00:19:17.871 "num_base_bdevs": 3, 00:19:17.871 "num_base_bdevs_discovered": 1, 00:19:17.871 "num_base_bdevs_operational": 3, 00:19:17.871 "base_bdevs_list": [ 00:19:17.871 { 00:19:17.871 "name": "BaseBdev1", 00:19:17.871 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:17.871 "is_configured": true, 00:19:17.871 "data_offset": 2048, 00:19:17.871 "data_size": 63488 00:19:17.871 }, 00:19:17.871 { 00:19:17.871 "name": null, 00:19:17.871 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:17.871 "is_configured": false, 00:19:17.871 "data_offset": 2048, 00:19:17.871 "data_size": 63488 00:19:17.871 }, 00:19:17.871 { 00:19:17.871 "name": null, 00:19:17.871 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:17.871 "is_configured": false, 00:19:17.871 "data_offset": 2048, 00:19:17.871 "data_size": 63488 00:19:17.871 } 00:19:17.871 ] 00:19:17.871 }' 00:19:17.871 14:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.871 14:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.847 14:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.847 14:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:18.847 14:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:18.847 14:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:19.106 [2024-07-25 14:02:08.118642] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.106 14:02:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.106 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.673 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.673 "name": "Existed_Raid", 00:19:19.673 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:19.673 "strip_size_kb": 64, 00:19:19.673 "state": "configuring", 00:19:19.673 "raid_level": "raid0", 00:19:19.673 "superblock": true, 00:19:19.673 "num_base_bdevs": 3, 00:19:19.673 "num_base_bdevs_discovered": 2, 00:19:19.673 "num_base_bdevs_operational": 3, 00:19:19.673 "base_bdevs_list": [ 00:19:19.673 { 00:19:19.673 "name": "BaseBdev1", 00:19:19.673 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:19.673 "is_configured": true, 00:19:19.673 "data_offset": 2048, 00:19:19.673 "data_size": 63488 00:19:19.673 }, 00:19:19.673 { 00:19:19.673 "name": null, 00:19:19.673 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:19.673 "is_configured": false, 00:19:19.673 "data_offset": 2048, 00:19:19.673 "data_size": 63488 00:19:19.673 }, 00:19:19.673 { 00:19:19.673 "name": "BaseBdev3", 00:19:19.673 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:19.673 "is_configured": true, 00:19:19.673 "data_offset": 2048, 00:19:19.673 "data_size": 63488 00:19:19.673 } 00:19:19.673 ] 00:19:19.673 }' 00:19:19.673 14:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.673 14:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.239 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.239 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:20.497 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:20.497 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:20.755 [2024-07-25 14:02:09.662958] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.755 14:02:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.755 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.756 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.756 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.756 14:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.322 14:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.322 "name": "Existed_Raid", 00:19:21.322 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:21.322 "strip_size_kb": 64, 00:19:21.322 "state": "configuring", 00:19:21.322 "raid_level": "raid0", 00:19:21.322 "superblock": true, 00:19:21.322 "num_base_bdevs": 3, 00:19:21.322 "num_base_bdevs_discovered": 1, 00:19:21.322 "num_base_bdevs_operational": 3, 00:19:21.322 "base_bdevs_list": [ 00:19:21.322 { 00:19:21.322 "name": null, 00:19:21.323 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:21.323 "is_configured": false, 00:19:21.323 "data_offset": 2048, 00:19:21.323 "data_size": 63488 00:19:21.323 }, 00:19:21.323 { 00:19:21.323 "name": null, 00:19:21.323 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:21.323 "is_configured": false, 00:19:21.323 "data_offset": 2048, 00:19:21.323 "data_size": 63488 00:19:21.323 }, 00:19:21.323 { 00:19:21.323 "name": "BaseBdev3", 00:19:21.323 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:21.323 "is_configured": true, 00:19:21.323 "data_offset": 2048, 00:19:21.323 "data_size": 63488 00:19:21.323 } 00:19:21.323 ] 00:19:21.323 }' 00:19:21.323 14:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.323 14:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.889 14:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.889 14:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:22.148 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:22.148 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:22.406 [2024-07-25 14:02:11.278930] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.406 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.664 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.664 "name": "Existed_Raid", 00:19:22.664 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:22.664 "strip_size_kb": 64, 00:19:22.664 "state": "configuring", 00:19:22.664 "raid_level": "raid0", 00:19:22.664 "superblock": true, 00:19:22.664 "num_base_bdevs": 3, 00:19:22.664 "num_base_bdevs_discovered": 2, 00:19:22.664 "num_base_bdevs_operational": 3, 00:19:22.664 "base_bdevs_list": [ 00:19:22.664 { 00:19:22.664 "name": null, 00:19:22.664 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:22.664 "is_configured": false, 00:19:22.664 "data_offset": 2048, 00:19:22.664 "data_size": 63488 00:19:22.664 }, 00:19:22.664 { 00:19:22.664 "name": "BaseBdev2", 00:19:22.664 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:22.664 "is_configured": true, 00:19:22.664 "data_offset": 2048, 00:19:22.664 "data_size": 63488 00:19:22.664 }, 00:19:22.664 { 00:19:22.664 "name": "BaseBdev3", 00:19:22.664 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:22.664 "is_configured": true, 00:19:22.664 "data_offset": 2048, 00:19:22.664 "data_size": 63488 00:19:22.664 } 00:19:22.664 ] 00:19:22.664 }' 00:19:22.664 14:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.664 14:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.231 14:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.231 14:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:23.489 14:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:23.489 14:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:23.489 14:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.747 14:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 852c3456-1c34-46a1-9a58-de1661eee406 00:19:24.312 [2024-07-25 14:02:13.066665] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:24.312 [2024-07-25 14:02:13.066934] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:19:24.312 [2024-07-25 14:02:13.066950] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:19:24.312 [2024-07-25 14:02:13.067078] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:24.312 [2024-07-25 14:02:13.067494] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:19:24.312 [2024-07-25 14:02:13.067522] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:19:24.312 [2024-07-25 14:02:13.067686] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.312 NewBaseBdev 00:19:24.312 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:24.312 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:24.312 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:24.312 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:24.312 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:24.312 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:24.312 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:24.578 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:24.578 [ 00:19:24.578 { 00:19:24.578 "name": "NewBaseBdev", 00:19:24.578 "aliases": [ 00:19:24.578 "852c3456-1c34-46a1-9a58-de1661eee406" 00:19:24.578 ], 00:19:24.578 "product_name": "Malloc disk", 00:19:24.578 "block_size": 512, 00:19:24.578 "num_blocks": 65536, 00:19:24.578 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:24.578 "assigned_rate_limits": { 00:19:24.578 "rw_ios_per_sec": 0, 00:19:24.578 "rw_mbytes_per_sec": 0, 00:19:24.578 "r_mbytes_per_sec": 0, 00:19:24.578 "w_mbytes_per_sec": 0 00:19:24.578 }, 00:19:24.578 "claimed": true, 00:19:24.578 "claim_type": "exclusive_write", 00:19:24.578 "zoned": false, 00:19:24.578 "supported_io_types": { 00:19:24.578 "read": true, 00:19:24.578 "write": true, 00:19:24.578 "unmap": true, 00:19:24.578 "flush": true, 00:19:24.578 "reset": true, 00:19:24.578 "nvme_admin": false, 00:19:24.578 "nvme_io": false, 00:19:24.578 "nvme_io_md": false, 00:19:24.578 "write_zeroes": true, 00:19:24.578 "zcopy": true, 00:19:24.578 "get_zone_info": false, 00:19:24.578 "zone_management": false, 00:19:24.578 "zone_append": false, 00:19:24.578 "compare": false, 00:19:24.578 "compare_and_write": false, 00:19:24.578 "abort": true, 00:19:24.578 "seek_hole": false, 00:19:24.578 "seek_data": false, 00:19:24.578 "copy": true, 00:19:24.578 "nvme_iov_md": false 00:19:24.578 }, 00:19:24.578 "memory_domains": [ 00:19:24.578 { 00:19:24.578 "dma_device_id": "system", 00:19:24.578 "dma_device_type": 1 00:19:24.578 }, 00:19:24.578 { 00:19:24.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.578 "dma_device_type": 2 00:19:24.578 } 00:19:24.578 ], 00:19:24.578 "driver_specific": {} 00:19:24.578 } 00:19:24.578 ] 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.843 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.101 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.101 "name": "Existed_Raid", 00:19:25.101 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:25.101 "strip_size_kb": 64, 00:19:25.101 "state": "online", 00:19:25.101 "raid_level": "raid0", 00:19:25.101 "superblock": true, 00:19:25.101 "num_base_bdevs": 3, 00:19:25.101 "num_base_bdevs_discovered": 3, 00:19:25.101 "num_base_bdevs_operational": 3, 00:19:25.101 "base_bdevs_list": [ 00:19:25.101 { 00:19:25.101 "name": "NewBaseBdev", 00:19:25.101 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:25.101 "is_configured": true, 00:19:25.101 "data_offset": 2048, 00:19:25.101 "data_size": 63488 00:19:25.101 }, 00:19:25.101 { 00:19:25.101 "name": "BaseBdev2", 00:19:25.101 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:25.101 "is_configured": true, 00:19:25.101 "data_offset": 2048, 00:19:25.101 "data_size": 63488 00:19:25.101 }, 00:19:25.101 { 00:19:25.101 "name": "BaseBdev3", 00:19:25.101 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:25.101 "is_configured": true, 00:19:25.101 "data_offset": 2048, 00:19:25.101 "data_size": 63488 00:19:25.101 } 00:19:25.101 ] 00:19:25.101 }' 00:19:25.101 14:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.101 14:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.668 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:25.668 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:25.668 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:25.668 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:25.668 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:25.668 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:25.668 14:02:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:25.668 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:25.927 [2024-07-25 14:02:14.815418] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.927 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:25.927 "name": "Existed_Raid", 00:19:25.927 "aliases": [ 00:19:25.927 "efc7ccd4-8b9b-4773-9ce3-8e33d223d656" 00:19:25.927 ], 00:19:25.927 "product_name": "Raid Volume", 00:19:25.927 "block_size": 512, 00:19:25.927 "num_blocks": 190464, 00:19:25.927 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:25.927 "assigned_rate_limits": { 00:19:25.927 "rw_ios_per_sec": 0, 00:19:25.927 "rw_mbytes_per_sec": 0, 00:19:25.927 "r_mbytes_per_sec": 0, 00:19:25.927 "w_mbytes_per_sec": 0 00:19:25.927 }, 00:19:25.927 "claimed": false, 00:19:25.927 "zoned": false, 00:19:25.927 "supported_io_types": { 00:19:25.927 "read": true, 00:19:25.927 "write": true, 00:19:25.927 "unmap": true, 00:19:25.927 "flush": true, 00:19:25.927 "reset": true, 00:19:25.927 "nvme_admin": false, 00:19:25.927 "nvme_io": false, 00:19:25.927 "nvme_io_md": false, 00:19:25.927 "write_zeroes": true, 00:19:25.927 "zcopy": false, 00:19:25.927 "get_zone_info": false, 00:19:25.927 "zone_management": false, 00:19:25.927 "zone_append": false, 00:19:25.927 "compare": false, 00:19:25.927 "compare_and_write": false, 00:19:25.927 "abort": false, 00:19:25.927 "seek_hole": false, 00:19:25.927 "seek_data": false, 00:19:25.927 "copy": false, 00:19:25.927 "nvme_iov_md": false 00:19:25.927 }, 00:19:25.927 "memory_domains": [ 00:19:25.927 { 00:19:25.927 "dma_device_id": "system", 00:19:25.927 "dma_device_type": 1 00:19:25.927 }, 00:19:25.927 { 00:19:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.927 "dma_device_type": 2 00:19:25.927 }, 00:19:25.927 { 00:19:25.927 "dma_device_id": "system", 00:19:25.927 "dma_device_type": 1 00:19:25.927 }, 00:19:25.927 { 00:19:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.927 "dma_device_type": 2 00:19:25.927 }, 00:19:25.927 { 00:19:25.927 "dma_device_id": "system", 00:19:25.927 "dma_device_type": 1 00:19:25.927 }, 00:19:25.927 { 00:19:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.927 "dma_device_type": 2 00:19:25.927 } 00:19:25.927 ], 00:19:25.927 "driver_specific": { 00:19:25.927 "raid": { 00:19:25.927 "uuid": "efc7ccd4-8b9b-4773-9ce3-8e33d223d656", 00:19:25.927 "strip_size_kb": 64, 00:19:25.927 "state": "online", 00:19:25.927 "raid_level": "raid0", 00:19:25.927 "superblock": true, 00:19:25.927 "num_base_bdevs": 3, 00:19:25.927 "num_base_bdevs_discovered": 3, 00:19:25.927 "num_base_bdevs_operational": 3, 00:19:25.927 "base_bdevs_list": [ 00:19:25.927 { 00:19:25.927 "name": "NewBaseBdev", 00:19:25.927 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:25.927 "is_configured": true, 00:19:25.927 "data_offset": 2048, 00:19:25.927 "data_size": 63488 00:19:25.927 }, 00:19:25.927 { 00:19:25.927 "name": "BaseBdev2", 00:19:25.927 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:25.927 "is_configured": true, 00:19:25.927 "data_offset": 2048, 00:19:25.927 "data_size": 63488 00:19:25.927 }, 00:19:25.927 { 00:19:25.927 "name": "BaseBdev3", 00:19:25.927 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:25.927 "is_configured": true, 00:19:25.927 "data_offset": 2048, 00:19:25.927 "data_size": 
63488 00:19:25.927 } 00:19:25.927 ] 00:19:25.927 } 00:19:25.927 } 00:19:25.927 }' 00:19:25.927 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:25.927 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:25.927 BaseBdev2 00:19:25.927 BaseBdev3' 00:19:25.927 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.927 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:25.927 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.186 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.186 "name": "NewBaseBdev", 00:19:26.186 "aliases": [ 00:19:26.186 "852c3456-1c34-46a1-9a58-de1661eee406" 00:19:26.186 ], 00:19:26.186 "product_name": "Malloc disk", 00:19:26.186 "block_size": 512, 00:19:26.186 "num_blocks": 65536, 00:19:26.186 "uuid": "852c3456-1c34-46a1-9a58-de1661eee406", 00:19:26.186 "assigned_rate_limits": { 00:19:26.186 "rw_ios_per_sec": 0, 00:19:26.186 "rw_mbytes_per_sec": 0, 00:19:26.186 "r_mbytes_per_sec": 0, 00:19:26.186 "w_mbytes_per_sec": 0 00:19:26.186 }, 00:19:26.186 "claimed": true, 00:19:26.186 "claim_type": "exclusive_write", 00:19:26.186 "zoned": false, 00:19:26.186 "supported_io_types": { 00:19:26.186 "read": true, 00:19:26.186 "write": true, 00:19:26.186 "unmap": true, 00:19:26.186 "flush": true, 00:19:26.186 "reset": true, 00:19:26.186 "nvme_admin": false, 00:19:26.186 "nvme_io": false, 00:19:26.186 "nvme_io_md": false, 00:19:26.186 "write_zeroes": true, 00:19:26.186 "zcopy": true, 00:19:26.186 "get_zone_info": false, 00:19:26.186 "zone_management": false, 00:19:26.186 "zone_append": false, 00:19:26.186 "compare": false, 00:19:26.186 "compare_and_write": false, 00:19:26.186 "abort": true, 00:19:26.186 "seek_hole": false, 00:19:26.186 "seek_data": false, 00:19:26.186 "copy": true, 00:19:26.186 "nvme_iov_md": false 00:19:26.186 }, 00:19:26.186 "memory_domains": [ 00:19:26.186 { 00:19:26.186 "dma_device_id": "system", 00:19:26.186 "dma_device_type": 1 00:19:26.186 }, 00:19:26.186 { 00:19:26.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.186 "dma_device_type": 2 00:19:26.186 } 00:19:26.186 ], 00:19:26.186 "driver_specific": {} 00:19:26.186 }' 00:19:26.186 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.186 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.186 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:26.186 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.444 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.444 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.444 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.444 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.444 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.444 14:02:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.444 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.703 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.703 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:26.703 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:26.703 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:26.962 "name": "BaseBdev2", 00:19:26.962 "aliases": [ 00:19:26.962 "dc6e506e-c045-4520-bb15-da5b0893b5a7" 00:19:26.962 ], 00:19:26.962 "product_name": "Malloc disk", 00:19:26.962 "block_size": 512, 00:19:26.962 "num_blocks": 65536, 00:19:26.962 "uuid": "dc6e506e-c045-4520-bb15-da5b0893b5a7", 00:19:26.962 "assigned_rate_limits": { 00:19:26.962 "rw_ios_per_sec": 0, 00:19:26.962 "rw_mbytes_per_sec": 0, 00:19:26.962 "r_mbytes_per_sec": 0, 00:19:26.962 "w_mbytes_per_sec": 0 00:19:26.962 }, 00:19:26.962 "claimed": true, 00:19:26.962 "claim_type": "exclusive_write", 00:19:26.962 "zoned": false, 00:19:26.962 "supported_io_types": { 00:19:26.962 "read": true, 00:19:26.962 "write": true, 00:19:26.962 "unmap": true, 00:19:26.962 "flush": true, 00:19:26.962 "reset": true, 00:19:26.962 "nvme_admin": false, 00:19:26.962 "nvme_io": false, 00:19:26.962 "nvme_io_md": false, 00:19:26.962 "write_zeroes": true, 00:19:26.962 "zcopy": true, 00:19:26.962 "get_zone_info": false, 00:19:26.962 "zone_management": false, 00:19:26.962 "zone_append": false, 00:19:26.962 "compare": false, 00:19:26.962 "compare_and_write": false, 00:19:26.962 "abort": true, 00:19:26.962 "seek_hole": false, 00:19:26.962 "seek_data": false, 00:19:26.962 "copy": true, 00:19:26.962 "nvme_iov_md": false 00:19:26.962 }, 00:19:26.962 "memory_domains": [ 00:19:26.962 { 00:19:26.962 "dma_device_id": "system", 00:19:26.962 "dma_device_type": 1 00:19:26.962 }, 00:19:26.962 { 00:19:26.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.962 "dma_device_type": 2 00:19:26.962 } 00:19:26.962 ], 00:19:26.962 "driver_specific": {} 00:19:26.962 }' 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:26.962 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:27.220 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:27.479 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:27.479 "name": "BaseBdev3", 00:19:27.479 "aliases": [ 00:19:27.479 "15f61e3f-36a4-4b8e-979e-e3fcc330872b" 00:19:27.479 ], 00:19:27.479 "product_name": "Malloc disk", 00:19:27.479 "block_size": 512, 00:19:27.479 "num_blocks": 65536, 00:19:27.479 "uuid": "15f61e3f-36a4-4b8e-979e-e3fcc330872b", 00:19:27.479 "assigned_rate_limits": { 00:19:27.479 "rw_ios_per_sec": 0, 00:19:27.479 "rw_mbytes_per_sec": 0, 00:19:27.479 "r_mbytes_per_sec": 0, 00:19:27.479 "w_mbytes_per_sec": 0 00:19:27.479 }, 00:19:27.479 "claimed": true, 00:19:27.479 "claim_type": "exclusive_write", 00:19:27.479 "zoned": false, 00:19:27.479 "supported_io_types": { 00:19:27.479 "read": true, 00:19:27.479 "write": true, 00:19:27.479 "unmap": true, 00:19:27.479 "flush": true, 00:19:27.479 "reset": true, 00:19:27.479 "nvme_admin": false, 00:19:27.479 "nvme_io": false, 00:19:27.479 "nvme_io_md": false, 00:19:27.479 "write_zeroes": true, 00:19:27.479 "zcopy": true, 00:19:27.479 "get_zone_info": false, 00:19:27.479 "zone_management": false, 00:19:27.479 "zone_append": false, 00:19:27.479 "compare": false, 00:19:27.479 "compare_and_write": false, 00:19:27.479 "abort": true, 00:19:27.479 "seek_hole": false, 00:19:27.479 "seek_data": false, 00:19:27.479 "copy": true, 00:19:27.479 "nvme_iov_md": false 00:19:27.479 }, 00:19:27.479 "memory_domains": [ 00:19:27.479 { 00:19:27.479 "dma_device_id": "system", 00:19:27.479 "dma_device_type": 1 00:19:27.479 }, 00:19:27.479 { 00:19:27.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.479 "dma_device_type": 2 00:19:27.479 } 00:19:27.479 ], 00:19:27.479 "driver_specific": {} 00:19:27.479 }' 00:19:27.479 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.479 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:27.479 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:27.479 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.738 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:27.738 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:27.738 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.738 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:27.738 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.738 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.738 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:27.996 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
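The run of jq probes above is the per-base-bdev property pass: for each configured member of Existed_Raid the test expects 512-byte blocks and no metadata size, interleave, or DIF type. For reference, a minimal stand-alone sketch of that pass, assuming the same rpc.py socket and jq filters shown in the trace; the loop and variable names below are illustrative, not the verbatim helpers from bdev_raid.sh.

#!/usr/bin/env bash
# Illustrative sketch only: mirrors the probes recorded in the xtrace above
# (block_size, md_size, md_interleave, dif_type) for each base bdev.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

for name in NewBaseBdev BaseBdev2 BaseBdev3; do
    # Same checks as the trace: fixed 512-byte blocks, no metadata, no DIF.
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size <<< "$info") == 512 ]]
    [[ $(jq .md_size <<< "$info") == null ]]
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type <<< "$info") == null ]]
done

Under set -e, any mismatched [[ ... ]] aborts immediately, which is how a failed comparison would surface as a test failure in a log like this one.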
00:19:27.996 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:28.255 [2024-07-25 14:02:17.055536] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.255 [2024-07-25 14:02:17.055607] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.255 [2024-07-25 14:02:17.055800] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.255 [2024-07-25 14:02:17.055908] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.255 [2024-07-25 14:02:17.055927] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 125993 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 125993 ']' 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 125993 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125993 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125993' 00:19:28.255 killing process with pid 125993 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 125993 00:19:28.255 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 125993 00:19:28.255 [2024-07-25 14:02:17.096216] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.512 [2024-07-25 14:02:17.348911] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:29.447 ************************************ 00:19:29.447 END TEST raid_state_function_test_sb 00:19:29.447 ************************************ 00:19:29.447 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:29.447 00:19:29.447 real 0m34.659s 00:19:29.447 user 1m4.580s 00:19:29.447 sys 0m3.964s 00:19:29.447 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:29.447 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.706 14:02:18 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:19:29.706 14:02:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:29.706 14:02:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:29.706 14:02:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.706 ************************************ 00:19:29.706 START TEST raid_superblock_test 00:19:29.706 ************************************ 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=127022 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 127022 /var/tmp/spdk-raid.sock 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 127022 ']' 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.706 14:02:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.706 [2024-07-25 14:02:18.614478] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
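Everything from this point on is driven over the private RPC socket the test just opened. A rough, self-contained sketch of that setup plus the base-bdev construction recorded in the trace that follows, using only the RPCs that appear in this log; the polling loop is a simplified stand-in for the waitforlisten helper, not its real implementation.

#!/usr/bin/env bash
# Sketch of the raid_superblock_test setup seen in this trace: start bdev_svc
# on a private socket with raid debug logging, wait for RPC, then build
# malloc -> passthru -> raid0 with a superblock (-s), as the log records.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

"$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!

# Simplified stand-in for waitforlisten: retry a cheap RPC until the app answers.
until rpc rpc_get_methods &> /dev/null; do sleep 0.1; done

for i in 1 2 3; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# -z 64: strip size in KiB; -s: write a raid superblock to each base bdev.
rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The -s flag is what makes this the superblock variant of the test: the superblock written to pt1/pt2/pt3 is what the later re-examine and "Superblock of a different raid bdev found" steps in the trace depend on.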
00:19:29.706 [2024-07-25 14:02:18.615258] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127022 ] 00:19:29.965 [2024-07-25 14:02:18.780473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.965 [2024-07-25 14:02:19.000091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.223 [2024-07-25 14:02:19.201052] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:30.796 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:31.065 malloc1 00:19:31.065 14:02:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:31.323 [2024-07-25 14:02:20.146699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:31.323 [2024-07-25 14:02:20.146840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.323 [2024-07-25 14:02:20.146889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:31.323 [2024-07-25 14:02:20.146927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.323 [2024-07-25 14:02:20.149640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.323 [2024-07-25 14:02:20.149699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:31.323 pt1 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:31.323 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:31.582 malloc2 00:19:31.582 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:31.840 [2024-07-25 14:02:20.735852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:31.840 [2024-07-25 14:02:20.735995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.840 [2024-07-25 14:02:20.736051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:31.840 [2024-07-25 14:02:20.736076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.840 [2024-07-25 14:02:20.738725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.840 [2024-07-25 14:02:20.738785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:31.840 pt2 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:31.840 14:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:32.098 malloc3 00:19:32.098 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:32.356 [2024-07-25 14:02:21.363625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:32.356 [2024-07-25 14:02:21.363799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.356 [2024-07-25 14:02:21.363850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:32.356 [2024-07-25 14:02:21.363884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.356 [2024-07-25 14:02:21.366679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.356 [2024-07-25 14:02:21.366747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:32.356 pt3 00:19:32.356 
14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:32.356 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:32.356 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:32.614 [2024-07-25 14:02:21.647718] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:32.614 [2024-07-25 14:02:21.650016] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:32.614 [2024-07-25 14:02:21.650117] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:32.614 [2024-07-25 14:02:21.650328] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:19:32.614 [2024-07-25 14:02:21.650344] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:32.615 [2024-07-25 14:02:21.650509] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:32.615 [2024-07-25 14:02:21.650942] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:19:32.615 [2024-07-25 14:02:21.650958] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:19:32.615 [2024-07-25 14:02:21.651176] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.873 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.132 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.132 "name": "raid_bdev1", 00:19:33.132 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:33.132 "strip_size_kb": 64, 00:19:33.132 "state": "online", 00:19:33.132 "raid_level": "raid0", 00:19:33.132 "superblock": true, 00:19:33.132 "num_base_bdevs": 3, 00:19:33.132 "num_base_bdevs_discovered": 3, 00:19:33.132 "num_base_bdevs_operational": 3, 00:19:33.132 "base_bdevs_list": [ 00:19:33.132 { 00:19:33.132 "name": "pt1", 00:19:33.132 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:33.132 "is_configured": true, 00:19:33.132 "data_offset": 2048, 00:19:33.132 "data_size": 63488 00:19:33.132 }, 00:19:33.132 { 00:19:33.132 "name": "pt2", 00:19:33.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.132 "is_configured": true, 00:19:33.132 "data_offset": 2048, 00:19:33.132 "data_size": 63488 00:19:33.132 }, 00:19:33.132 { 00:19:33.132 "name": "pt3", 00:19:33.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:33.132 "is_configured": true, 00:19:33.132 "data_offset": 2048, 00:19:33.132 "data_size": 63488 00:19:33.132 } 00:19:33.132 ] 00:19:33.132 }' 00:19:33.132 14:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.132 14:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:33.699 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:33.958 [2024-07-25 14:02:22.916161] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.958 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:33.958 "name": "raid_bdev1", 00:19:33.958 "aliases": [ 00:19:33.958 "578b3283-854b-488f-9e4e-1f00eb44dcdd" 00:19:33.958 ], 00:19:33.958 "product_name": "Raid Volume", 00:19:33.958 "block_size": 512, 00:19:33.958 "num_blocks": 190464, 00:19:33.958 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:33.958 "assigned_rate_limits": { 00:19:33.958 "rw_ios_per_sec": 0, 00:19:33.958 "rw_mbytes_per_sec": 0, 00:19:33.958 "r_mbytes_per_sec": 0, 00:19:33.958 "w_mbytes_per_sec": 0 00:19:33.958 }, 00:19:33.958 "claimed": false, 00:19:33.958 "zoned": false, 00:19:33.958 "supported_io_types": { 00:19:33.958 "read": true, 00:19:33.958 "write": true, 00:19:33.958 "unmap": true, 00:19:33.958 "flush": true, 00:19:33.958 "reset": true, 00:19:33.958 "nvme_admin": false, 00:19:33.958 "nvme_io": false, 00:19:33.958 "nvme_io_md": false, 00:19:33.958 "write_zeroes": true, 00:19:33.958 "zcopy": false, 00:19:33.958 "get_zone_info": false, 00:19:33.958 "zone_management": false, 00:19:33.958 "zone_append": false, 00:19:33.958 "compare": false, 00:19:33.958 "compare_and_write": false, 00:19:33.958 "abort": false, 00:19:33.958 "seek_hole": false, 00:19:33.958 "seek_data": false, 00:19:33.958 "copy": false, 00:19:33.958 "nvme_iov_md": false 00:19:33.958 }, 00:19:33.958 "memory_domains": [ 00:19:33.958 { 00:19:33.958 "dma_device_id": "system", 00:19:33.958 "dma_device_type": 1 00:19:33.958 }, 00:19:33.958 { 00:19:33.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.958 "dma_device_type": 2 00:19:33.958 }, 00:19:33.958 { 00:19:33.958 "dma_device_id": "system", 00:19:33.958 "dma_device_type": 1 00:19:33.958 }, 
00:19:33.958 { 00:19:33.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.958 "dma_device_type": 2 00:19:33.958 }, 00:19:33.958 { 00:19:33.958 "dma_device_id": "system", 00:19:33.958 "dma_device_type": 1 00:19:33.958 }, 00:19:33.958 { 00:19:33.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.958 "dma_device_type": 2 00:19:33.958 } 00:19:33.958 ], 00:19:33.958 "driver_specific": { 00:19:33.958 "raid": { 00:19:33.958 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:33.958 "strip_size_kb": 64, 00:19:33.958 "state": "online", 00:19:33.958 "raid_level": "raid0", 00:19:33.958 "superblock": true, 00:19:33.958 "num_base_bdevs": 3, 00:19:33.958 "num_base_bdevs_discovered": 3, 00:19:33.958 "num_base_bdevs_operational": 3, 00:19:33.958 "base_bdevs_list": [ 00:19:33.958 { 00:19:33.958 "name": "pt1", 00:19:33.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.958 "is_configured": true, 00:19:33.958 "data_offset": 2048, 00:19:33.958 "data_size": 63488 00:19:33.958 }, 00:19:33.958 { 00:19:33.958 "name": "pt2", 00:19:33.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.958 "is_configured": true, 00:19:33.958 "data_offset": 2048, 00:19:33.958 "data_size": 63488 00:19:33.958 }, 00:19:33.958 { 00:19:33.958 "name": "pt3", 00:19:33.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:33.958 "is_configured": true, 00:19:33.958 "data_offset": 2048, 00:19:33.958 "data_size": 63488 00:19:33.958 } 00:19:33.958 ] 00:19:33.958 } 00:19:33.958 } 00:19:33.958 }' 00:19:33.958 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:33.958 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:33.958 pt2 00:19:33.958 pt3' 00:19:33.958 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.958 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:33.958 14:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:34.222 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:34.222 "name": "pt1", 00:19:34.222 "aliases": [ 00:19:34.222 "00000000-0000-0000-0000-000000000001" 00:19:34.222 ], 00:19:34.222 "product_name": "passthru", 00:19:34.222 "block_size": 512, 00:19:34.222 "num_blocks": 65536, 00:19:34.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.222 "assigned_rate_limits": { 00:19:34.222 "rw_ios_per_sec": 0, 00:19:34.222 "rw_mbytes_per_sec": 0, 00:19:34.222 "r_mbytes_per_sec": 0, 00:19:34.222 "w_mbytes_per_sec": 0 00:19:34.222 }, 00:19:34.222 "claimed": true, 00:19:34.222 "claim_type": "exclusive_write", 00:19:34.222 "zoned": false, 00:19:34.222 "supported_io_types": { 00:19:34.222 "read": true, 00:19:34.222 "write": true, 00:19:34.222 "unmap": true, 00:19:34.222 "flush": true, 00:19:34.222 "reset": true, 00:19:34.222 "nvme_admin": false, 00:19:34.222 "nvme_io": false, 00:19:34.222 "nvme_io_md": false, 00:19:34.222 "write_zeroes": true, 00:19:34.222 "zcopy": true, 00:19:34.222 "get_zone_info": false, 00:19:34.222 "zone_management": false, 00:19:34.222 "zone_append": false, 00:19:34.222 "compare": false, 00:19:34.222 "compare_and_write": false, 00:19:34.222 "abort": true, 00:19:34.222 "seek_hole": false, 00:19:34.222 "seek_data": false, 00:19:34.222 "copy": true, 00:19:34.222 "nvme_iov_md": false 
00:19:34.222 }, 00:19:34.222 "memory_domains": [ 00:19:34.222 { 00:19:34.222 "dma_device_id": "system", 00:19:34.222 "dma_device_type": 1 00:19:34.222 }, 00:19:34.222 { 00:19:34.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.222 "dma_device_type": 2 00:19:34.223 } 00:19:34.223 ], 00:19:34.223 "driver_specific": { 00:19:34.223 "passthru": { 00:19:34.223 "name": "pt1", 00:19:34.223 "base_bdev_name": "malloc1" 00:19:34.223 } 00:19:34.223 } 00:19:34.223 }' 00:19:34.223 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.493 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:34.494 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.752 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.752 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:34.752 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:34.752 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:34.752 14:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:35.010 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:35.010 "name": "pt2", 00:19:35.010 "aliases": [ 00:19:35.010 "00000000-0000-0000-0000-000000000002" 00:19:35.010 ], 00:19:35.010 "product_name": "passthru", 00:19:35.010 "block_size": 512, 00:19:35.010 "num_blocks": 65536, 00:19:35.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.010 "assigned_rate_limits": { 00:19:35.010 "rw_ios_per_sec": 0, 00:19:35.010 "rw_mbytes_per_sec": 0, 00:19:35.010 "r_mbytes_per_sec": 0, 00:19:35.010 "w_mbytes_per_sec": 0 00:19:35.010 }, 00:19:35.010 "claimed": true, 00:19:35.010 "claim_type": "exclusive_write", 00:19:35.010 "zoned": false, 00:19:35.010 "supported_io_types": { 00:19:35.010 "read": true, 00:19:35.010 "write": true, 00:19:35.010 "unmap": true, 00:19:35.010 "flush": true, 00:19:35.010 "reset": true, 00:19:35.010 "nvme_admin": false, 00:19:35.010 "nvme_io": false, 00:19:35.010 "nvme_io_md": false, 00:19:35.010 "write_zeroes": true, 00:19:35.010 "zcopy": true, 00:19:35.010 "get_zone_info": false, 00:19:35.010 "zone_management": false, 00:19:35.010 "zone_append": false, 00:19:35.010 "compare": false, 00:19:35.010 "compare_and_write": false, 00:19:35.010 "abort": true, 00:19:35.010 "seek_hole": false, 00:19:35.010 "seek_data": false, 00:19:35.010 "copy": true, 00:19:35.010 "nvme_iov_md": false 00:19:35.010 }, 00:19:35.010 "memory_domains": [ 00:19:35.010 { 00:19:35.010 "dma_device_id": "system", 00:19:35.010 "dma_device_type": 1 00:19:35.010 }, 
00:19:35.010 { 00:19:35.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.010 "dma_device_type": 2 00:19:35.010 } 00:19:35.010 ], 00:19:35.010 "driver_specific": { 00:19:35.010 "passthru": { 00:19:35.010 "name": "pt2", 00:19:35.010 "base_bdev_name": "malloc2" 00:19:35.010 } 00:19:35.010 } 00:19:35.010 }' 00:19:35.010 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.269 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.269 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:35.269 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.269 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:35.269 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:35.269 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.269 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:35.527 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:35.527 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.527 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:35.527 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:35.527 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:35.527 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:35.527 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:35.786 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:35.786 "name": "pt3", 00:19:35.786 "aliases": [ 00:19:35.786 "00000000-0000-0000-0000-000000000003" 00:19:35.786 ], 00:19:35.786 "product_name": "passthru", 00:19:35.786 "block_size": 512, 00:19:35.786 "num_blocks": 65536, 00:19:35.786 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:35.786 "assigned_rate_limits": { 00:19:35.786 "rw_ios_per_sec": 0, 00:19:35.786 "rw_mbytes_per_sec": 0, 00:19:35.786 "r_mbytes_per_sec": 0, 00:19:35.786 "w_mbytes_per_sec": 0 00:19:35.786 }, 00:19:35.786 "claimed": true, 00:19:35.786 "claim_type": "exclusive_write", 00:19:35.786 "zoned": false, 00:19:35.786 "supported_io_types": { 00:19:35.786 "read": true, 00:19:35.786 "write": true, 00:19:35.786 "unmap": true, 00:19:35.786 "flush": true, 00:19:35.786 "reset": true, 00:19:35.786 "nvme_admin": false, 00:19:35.786 "nvme_io": false, 00:19:35.786 "nvme_io_md": false, 00:19:35.786 "write_zeroes": true, 00:19:35.786 "zcopy": true, 00:19:35.786 "get_zone_info": false, 00:19:35.786 "zone_management": false, 00:19:35.786 "zone_append": false, 00:19:35.786 "compare": false, 00:19:35.787 "compare_and_write": false, 00:19:35.787 "abort": true, 00:19:35.787 "seek_hole": false, 00:19:35.787 "seek_data": false, 00:19:35.787 "copy": true, 00:19:35.787 "nvme_iov_md": false 00:19:35.787 }, 00:19:35.787 "memory_domains": [ 00:19:35.787 { 00:19:35.787 "dma_device_id": "system", 00:19:35.787 "dma_device_type": 1 00:19:35.787 }, 00:19:35.787 { 00:19:35.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.787 "dma_device_type": 2 00:19:35.787 } 00:19:35.787 ], 00:19:35.787 
"driver_specific": { 00:19:35.787 "passthru": { 00:19:35.787 "name": "pt3", 00:19:35.787 "base_bdev_name": "malloc3" 00:19:35.787 } 00:19:35.787 } 00:19:35.787 }' 00:19:35.787 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.787 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:35.787 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:35.787 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.046 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:36.046 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:36.046 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.046 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:36.046 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:36.046 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.046 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:36.305 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:36.305 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:19:36.305 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:36.563 [2024-07-25 14:02:25.412924] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.563 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=578b3283-854b-488f-9e4e-1f00eb44dcdd 00:19:36.563 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 578b3283-854b-488f-9e4e-1f00eb44dcdd ']' 00:19:36.563 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:36.822 [2024-07-25 14:02:25.704413] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.822 [2024-07-25 14:02:25.704479] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.822 [2024-07-25 14:02:25.704575] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.822 [2024-07-25 14:02:25.704659] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.822 [2024-07-25 14:02:25.704674] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:19:36.822 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.822 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:19:37.081 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:19:37.081 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:19:37.081 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.081 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:37.339 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.339 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:37.598 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.598 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:37.856 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:37.856 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:38.426 [2024-07-25 14:02:27.398225] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:38.426 [2024-07-25 14:02:27.400521] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:38.426 [2024-07-25 14:02:27.400626] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:38.426 [2024-07-25 14:02:27.400711] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:38.426 [2024-07-25 
14:02:27.400854] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:38.426 [2024-07-25 14:02:27.400931] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:38.426 [2024-07-25 14:02:27.400986] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.426 [2024-07-25 14:02:27.401002] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:19:38.426 request: 00:19:38.426 { 00:19:38.426 "name": "raid_bdev1", 00:19:38.426 "raid_level": "raid0", 00:19:38.426 "base_bdevs": [ 00:19:38.426 "malloc1", 00:19:38.426 "malloc2", 00:19:38.426 "malloc3" 00:19:38.426 ], 00:19:38.426 "strip_size_kb": 64, 00:19:38.426 "superblock": false, 00:19:38.426 "method": "bdev_raid_create", 00:19:38.426 "req_id": 1 00:19:38.426 } 00:19:38.426 Got JSON-RPC error response 00:19:38.426 response: 00:19:38.426 { 00:19:38.426 "code": -17, 00:19:38.426 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:38.426 } 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:38.426 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.427 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:19:38.684 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:19:38.684 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:19:38.684 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:38.942 [2024-07-25 14:02:27.890181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:38.942 [2024-07-25 14:02:27.890321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.942 [2024-07-25 14:02:27.890375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:38.942 [2024-07-25 14:02:27.890404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.942 [2024-07-25 14:02:27.893083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.942 [2024-07-25 14:02:27.893150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:38.942 [2024-07-25 14:02:27.893311] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:38.942 [2024-07-25 14:02:27.893378] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.942 pt1 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.942 14:02:27 
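For reference, the expected-failure check logged above reduces to the following shell sketch; the rpc.py path, RPC socket and bdev names are simply the ones used in this run, a running SPDK target with the three malloc bdevs (still carrying the old raid_bdev1 superblock) is assumed, and the if/echo wrapper is illustrative scaffolding rather than the test script itself:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # After pt1..pt3 are deleted, confirm no passthru bdevs remain.
    "$rpc" -s "$sock" bdev_get_bdevs \
        | jq -r '[.[] | select(.product_name == "passthru")] | any'   # prints: false

    # Re-creating the array directly from the malloc bdevs must fail with
    # -17 ("File exists"): their superblocks already reference raid_bdev1.
    if "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
            -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo "unexpected success" >&2
    fi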
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.942 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.200 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.200 "name": "raid_bdev1", 00:19:39.200 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:39.200 "strip_size_kb": 64, 00:19:39.200 "state": "configuring", 00:19:39.200 "raid_level": "raid0", 00:19:39.200 "superblock": true, 00:19:39.200 "num_base_bdevs": 3, 00:19:39.200 "num_base_bdevs_discovered": 1, 00:19:39.200 "num_base_bdevs_operational": 3, 00:19:39.200 "base_bdevs_list": [ 00:19:39.200 { 00:19:39.200 "name": "pt1", 00:19:39.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.200 "is_configured": true, 00:19:39.200 "data_offset": 2048, 00:19:39.200 "data_size": 63488 00:19:39.200 }, 00:19:39.200 { 00:19:39.200 "name": null, 00:19:39.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.200 "is_configured": false, 00:19:39.200 "data_offset": 2048, 00:19:39.200 "data_size": 63488 00:19:39.200 }, 00:19:39.200 { 00:19:39.200 "name": null, 00:19:39.200 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:39.200 "is_configured": false, 00:19:39.200 "data_offset": 2048, 00:19:39.200 "data_size": 63488 00:19:39.200 } 00:19:39.200 ] 00:19:39.200 }' 00:19:39.200 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.200 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.767 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:19:39.767 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.025 [2024-07-25 14:02:29.042424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.025 [2024-07-25 14:02:29.042571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.025 [2024-07-25 14:02:29.042632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:40.025 [2024-07-25 14:02:29.042666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.025 [2024-07-25 14:02:29.043387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.025 [2024-07-25 14:02:29.043449] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:19:40.025 [2024-07-25 14:02:29.043604] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:40.025 [2024-07-25 14:02:29.043648] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.025 pt2 00:19:40.025 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:40.283 [2024-07-25 14:02:29.326515] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.541 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.800 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.800 "name": "raid_bdev1", 00:19:40.800 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:40.800 "strip_size_kb": 64, 00:19:40.800 "state": "configuring", 00:19:40.800 "raid_level": "raid0", 00:19:40.800 "superblock": true, 00:19:40.800 "num_base_bdevs": 3, 00:19:40.800 "num_base_bdevs_discovered": 1, 00:19:40.800 "num_base_bdevs_operational": 3, 00:19:40.800 "base_bdevs_list": [ 00:19:40.800 { 00:19:40.800 "name": "pt1", 00:19:40.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.800 "is_configured": true, 00:19:40.800 "data_offset": 2048, 00:19:40.800 "data_size": 63488 00:19:40.800 }, 00:19:40.800 { 00:19:40.800 "name": null, 00:19:40.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.800 "is_configured": false, 00:19:40.800 "data_offset": 2048, 00:19:40.800 "data_size": 63488 00:19:40.800 }, 00:19:40.800 { 00:19:40.800 "name": null, 00:19:40.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.800 "is_configured": false, 00:19:40.801 "data_offset": 2048, 00:19:40.801 "data_size": 63488 00:19:40.801 } 00:19:40.801 ] 00:19:40.801 }' 00:19:40.801 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.801 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.368 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:19:41.368 14:02:30 bdev_raid.raid_superblock_test -- 
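The verify_raid_bdev_state checks recorded here amount to pulling the raid_bdev1 descriptor and comparing a few of its fields; a minimal sketch using the same RPC socket and jq filter as the log (expected values are the ones asserted at this point of the run, and the echo/jq field checks are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")')

    echo "$info" | jq -r .state                       # expect: configuring
    echo "$info" | jq -r .raid_level                  # expect: raid0
    echo "$info" | jq -r .num_base_bdevs_discovered   # expect: 1 (only pt1 configured so far)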
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:41.368 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.626 [2024-07-25 14:02:30.542715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.626 [2024-07-25 14:02:30.542881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.626 [2024-07-25 14:02:30.542942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:41.626 [2024-07-25 14:02:30.543008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.626 [2024-07-25 14:02:30.543803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.626 [2024-07-25 14:02:30.543871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.626 [2024-07-25 14:02:30.544006] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:41.626 [2024-07-25 14:02:30.544039] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.626 pt2 00:19:41.626 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:41.626 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:41.626 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:41.884 [2024-07-25 14:02:30.834793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:41.884 [2024-07-25 14:02:30.834938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.884 [2024-07-25 14:02:30.834988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:41.884 [2024-07-25 14:02:30.835028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.884 [2024-07-25 14:02:30.835677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.884 [2024-07-25 14:02:30.835744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:41.884 [2024-07-25 14:02:30.835881] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:41.884 [2024-07-25 14:02:30.835914] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:41.884 [2024-07-25 14:02:30.836083] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:19:41.884 [2024-07-25 14:02:30.836112] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:41.884 [2024-07-25 14:02:30.836223] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:41.884 [2024-07-25 14:02:30.836605] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:19:41.884 [2024-07-25 14:02:30.836649] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:19:41.884 [2024-07-25 14:02:30.836816] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.884 pt3 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( 
i++ )) 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.884 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.143 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.143 "name": "raid_bdev1", 00:19:42.143 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:42.143 "strip_size_kb": 64, 00:19:42.143 "state": "online", 00:19:42.143 "raid_level": "raid0", 00:19:42.143 "superblock": true, 00:19:42.143 "num_base_bdevs": 3, 00:19:42.143 "num_base_bdevs_discovered": 3, 00:19:42.143 "num_base_bdevs_operational": 3, 00:19:42.143 "base_bdevs_list": [ 00:19:42.143 { 00:19:42.143 "name": "pt1", 00:19:42.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.143 "is_configured": true, 00:19:42.143 "data_offset": 2048, 00:19:42.143 "data_size": 63488 00:19:42.143 }, 00:19:42.143 { 00:19:42.143 "name": "pt2", 00:19:42.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.143 "is_configured": true, 00:19:42.143 "data_offset": 2048, 00:19:42.143 "data_size": 63488 00:19:42.143 }, 00:19:42.143 { 00:19:42.143 "name": "pt3", 00:19:42.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:42.143 "is_configured": true, 00:19:42.143 "data_offset": 2048, 00:19:42.143 "data_size": 63488 00:19:42.143 } 00:19:42.143 ] 00:19:42.143 }' 00:19:42.143 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.143 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:43.076 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:43.334 [2024-07-25 14:02:32.155329] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.334 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:43.334 "name": "raid_bdev1", 00:19:43.334 "aliases": [ 00:19:43.334 "578b3283-854b-488f-9e4e-1f00eb44dcdd" 00:19:43.334 ], 00:19:43.334 "product_name": "Raid Volume", 00:19:43.334 "block_size": 512, 00:19:43.334 "num_blocks": 190464, 00:19:43.334 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:43.334 "assigned_rate_limits": { 00:19:43.334 "rw_ios_per_sec": 0, 00:19:43.334 "rw_mbytes_per_sec": 0, 00:19:43.334 "r_mbytes_per_sec": 0, 00:19:43.334 "w_mbytes_per_sec": 0 00:19:43.334 }, 00:19:43.334 "claimed": false, 00:19:43.334 "zoned": false, 00:19:43.334 "supported_io_types": { 00:19:43.334 "read": true, 00:19:43.334 "write": true, 00:19:43.334 "unmap": true, 00:19:43.334 "flush": true, 00:19:43.334 "reset": true, 00:19:43.334 "nvme_admin": false, 00:19:43.335 "nvme_io": false, 00:19:43.335 "nvme_io_md": false, 00:19:43.335 "write_zeroes": true, 00:19:43.335 "zcopy": false, 00:19:43.335 "get_zone_info": false, 00:19:43.335 "zone_management": false, 00:19:43.335 "zone_append": false, 00:19:43.335 "compare": false, 00:19:43.335 "compare_and_write": false, 00:19:43.335 "abort": false, 00:19:43.335 "seek_hole": false, 00:19:43.335 "seek_data": false, 00:19:43.335 "copy": false, 00:19:43.335 "nvme_iov_md": false 00:19:43.335 }, 00:19:43.335 "memory_domains": [ 00:19:43.335 { 00:19:43.335 "dma_device_id": "system", 00:19:43.335 "dma_device_type": 1 00:19:43.335 }, 00:19:43.335 { 00:19:43.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.335 "dma_device_type": 2 00:19:43.335 }, 00:19:43.335 { 00:19:43.335 "dma_device_id": "system", 00:19:43.335 "dma_device_type": 1 00:19:43.335 }, 00:19:43.335 { 00:19:43.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.335 "dma_device_type": 2 00:19:43.335 }, 00:19:43.335 { 00:19:43.335 "dma_device_id": "system", 00:19:43.335 "dma_device_type": 1 00:19:43.335 }, 00:19:43.335 { 00:19:43.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.335 "dma_device_type": 2 00:19:43.335 } 00:19:43.335 ], 00:19:43.335 "driver_specific": { 00:19:43.335 "raid": { 00:19:43.335 "uuid": "578b3283-854b-488f-9e4e-1f00eb44dcdd", 00:19:43.335 "strip_size_kb": 64, 00:19:43.335 "state": "online", 00:19:43.335 "raid_level": "raid0", 00:19:43.335 "superblock": true, 00:19:43.335 "num_base_bdevs": 3, 00:19:43.335 "num_base_bdevs_discovered": 3, 00:19:43.335 "num_base_bdevs_operational": 3, 00:19:43.335 "base_bdevs_list": [ 00:19:43.335 { 00:19:43.335 "name": "pt1", 00:19:43.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:43.335 "is_configured": true, 00:19:43.335 "data_offset": 2048, 00:19:43.335 "data_size": 63488 00:19:43.335 }, 00:19:43.335 { 00:19:43.335 "name": "pt2", 00:19:43.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.335 "is_configured": true, 00:19:43.335 "data_offset": 2048, 00:19:43.335 "data_size": 63488 00:19:43.335 }, 00:19:43.335 { 00:19:43.335 "name": "pt3", 00:19:43.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.335 "is_configured": true, 00:19:43.335 "data_offset": 2048, 00:19:43.335 "data_size": 63488 00:19:43.335 } 
00:19:43.335 ] 00:19:43.335 } 00:19:43.335 } 00:19:43.335 }' 00:19:43.335 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:43.335 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:43.335 pt2 00:19:43.335 pt3' 00:19:43.335 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:43.335 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:43.335 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:43.593 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:43.593 "name": "pt1", 00:19:43.593 "aliases": [ 00:19:43.593 "00000000-0000-0000-0000-000000000001" 00:19:43.593 ], 00:19:43.593 "product_name": "passthru", 00:19:43.593 "block_size": 512, 00:19:43.593 "num_blocks": 65536, 00:19:43.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:43.593 "assigned_rate_limits": { 00:19:43.593 "rw_ios_per_sec": 0, 00:19:43.593 "rw_mbytes_per_sec": 0, 00:19:43.593 "r_mbytes_per_sec": 0, 00:19:43.593 "w_mbytes_per_sec": 0 00:19:43.593 }, 00:19:43.593 "claimed": true, 00:19:43.593 "claim_type": "exclusive_write", 00:19:43.593 "zoned": false, 00:19:43.593 "supported_io_types": { 00:19:43.593 "read": true, 00:19:43.593 "write": true, 00:19:43.593 "unmap": true, 00:19:43.593 "flush": true, 00:19:43.593 "reset": true, 00:19:43.593 "nvme_admin": false, 00:19:43.593 "nvme_io": false, 00:19:43.593 "nvme_io_md": false, 00:19:43.593 "write_zeroes": true, 00:19:43.593 "zcopy": true, 00:19:43.593 "get_zone_info": false, 00:19:43.593 "zone_management": false, 00:19:43.593 "zone_append": false, 00:19:43.593 "compare": false, 00:19:43.593 "compare_and_write": false, 00:19:43.593 "abort": true, 00:19:43.593 "seek_hole": false, 00:19:43.593 "seek_data": false, 00:19:43.593 "copy": true, 00:19:43.593 "nvme_iov_md": false 00:19:43.593 }, 00:19:43.593 "memory_domains": [ 00:19:43.593 { 00:19:43.593 "dma_device_id": "system", 00:19:43.593 "dma_device_type": 1 00:19:43.593 }, 00:19:43.593 { 00:19:43.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.593 "dma_device_type": 2 00:19:43.593 } 00:19:43.593 ], 00:19:43.593 "driver_specific": { 00:19:43.593 "passthru": { 00:19:43.593 "name": "pt1", 00:19:43.593 "base_bdev_name": "malloc1" 00:19:43.593 } 00:19:43.593 } 00:19:43.593 }' 00:19:43.594 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:43.594 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:43.594 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:43.594 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.852 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:43.852 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:43.852 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.852 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.852 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:43.852 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:19:43.852 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.111 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:44.111 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:44.111 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:44.111 14:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:44.369 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:44.369 "name": "pt2", 00:19:44.369 "aliases": [ 00:19:44.369 "00000000-0000-0000-0000-000000000002" 00:19:44.369 ], 00:19:44.369 "product_name": "passthru", 00:19:44.369 "block_size": 512, 00:19:44.369 "num_blocks": 65536, 00:19:44.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:44.369 "assigned_rate_limits": { 00:19:44.369 "rw_ios_per_sec": 0, 00:19:44.369 "rw_mbytes_per_sec": 0, 00:19:44.369 "r_mbytes_per_sec": 0, 00:19:44.369 "w_mbytes_per_sec": 0 00:19:44.369 }, 00:19:44.369 "claimed": true, 00:19:44.369 "claim_type": "exclusive_write", 00:19:44.369 "zoned": false, 00:19:44.369 "supported_io_types": { 00:19:44.369 "read": true, 00:19:44.369 "write": true, 00:19:44.369 "unmap": true, 00:19:44.369 "flush": true, 00:19:44.369 "reset": true, 00:19:44.369 "nvme_admin": false, 00:19:44.369 "nvme_io": false, 00:19:44.369 "nvme_io_md": false, 00:19:44.369 "write_zeroes": true, 00:19:44.369 "zcopy": true, 00:19:44.369 "get_zone_info": false, 00:19:44.369 "zone_management": false, 00:19:44.369 "zone_append": false, 00:19:44.369 "compare": false, 00:19:44.369 "compare_and_write": false, 00:19:44.369 "abort": true, 00:19:44.369 "seek_hole": false, 00:19:44.369 "seek_data": false, 00:19:44.369 "copy": true, 00:19:44.369 "nvme_iov_md": false 00:19:44.369 }, 00:19:44.369 "memory_domains": [ 00:19:44.369 { 00:19:44.369 "dma_device_id": "system", 00:19:44.369 "dma_device_type": 1 00:19:44.369 }, 00:19:44.369 { 00:19:44.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.369 "dma_device_type": 2 00:19:44.369 } 00:19:44.369 ], 00:19:44.369 "driver_specific": { 00:19:44.369 "passthru": { 00:19:44.369 "name": "pt2", 00:19:44.369 "base_bdev_name": "malloc2" 00:19:44.369 } 00:19:44.369 } 00:19:44.369 }' 00:19:44.369 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.369 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:44.369 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:44.369 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:44.628 14:02:33 
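The per-base-bdev property checks above repeat one pattern for pt1, pt2 and pt3; condensed into a loop for readability (socket and names from this run, expected values as asserted in the log, the loop itself is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    for name in pt1 pt2 pt3; do
        info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        echo "$info" | jq .block_size      # expect: 512
        echo "$info" | jq .md_size         # expect: null (no per-block metadata)
        echo "$info" | jq .md_interleave   # expect: null
        echo "$info" | jq .dif_type        # expect: null
    done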
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:44.628 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:45.195 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:45.195 "name": "pt3", 00:19:45.195 "aliases": [ 00:19:45.195 "00000000-0000-0000-0000-000000000003" 00:19:45.195 ], 00:19:45.195 "product_name": "passthru", 00:19:45.195 "block_size": 512, 00:19:45.195 "num_blocks": 65536, 00:19:45.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:45.195 "assigned_rate_limits": { 00:19:45.195 "rw_ios_per_sec": 0, 00:19:45.195 "rw_mbytes_per_sec": 0, 00:19:45.195 "r_mbytes_per_sec": 0, 00:19:45.195 "w_mbytes_per_sec": 0 00:19:45.195 }, 00:19:45.195 "claimed": true, 00:19:45.195 "claim_type": "exclusive_write", 00:19:45.195 "zoned": false, 00:19:45.195 "supported_io_types": { 00:19:45.195 "read": true, 00:19:45.195 "write": true, 00:19:45.195 "unmap": true, 00:19:45.195 "flush": true, 00:19:45.195 "reset": true, 00:19:45.195 "nvme_admin": false, 00:19:45.195 "nvme_io": false, 00:19:45.195 "nvme_io_md": false, 00:19:45.195 "write_zeroes": true, 00:19:45.195 "zcopy": true, 00:19:45.195 "get_zone_info": false, 00:19:45.195 "zone_management": false, 00:19:45.195 "zone_append": false, 00:19:45.195 "compare": false, 00:19:45.195 "compare_and_write": false, 00:19:45.195 "abort": true, 00:19:45.195 "seek_hole": false, 00:19:45.195 "seek_data": false, 00:19:45.195 "copy": true, 00:19:45.195 "nvme_iov_md": false 00:19:45.195 }, 00:19:45.195 "memory_domains": [ 00:19:45.195 { 00:19:45.195 "dma_device_id": "system", 00:19:45.195 "dma_device_type": 1 00:19:45.195 }, 00:19:45.195 { 00:19:45.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.195 "dma_device_type": 2 00:19:45.195 } 00:19:45.195 ], 00:19:45.195 "driver_specific": { 00:19:45.195 "passthru": { 00:19:45.195 "name": "pt3", 00:19:45.195 "base_bdev_name": "malloc3" 00:19:45.195 } 00:19:45.195 } 00:19:45.195 }' 00:19:45.195 14:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:45.195 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:45.195 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:45.195 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:45.195 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:45.195 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:45.195 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:45.452 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:45.452 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:45.452 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:45.452 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:45.452 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:45.452 14:02:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:45.452 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:19:45.710 [2024-07-25 14:02:34.675807] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 578b3283-854b-488f-9e4e-1f00eb44dcdd '!=' 578b3283-854b-488f-9e4e-1f00eb44dcdd ']' 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 127022 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 127022 ']' 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 127022 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127022 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127022' 00:19:45.710 killing process with pid 127022 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 127022 00:19:45.710 14:02:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 127022 00:19:45.710 [2024-07-25 14:02:34.723560] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.711 [2024-07-25 14:02:34.723665] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.711 [2024-07-25 14:02:34.723742] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.711 [2024-07-25 14:02:34.723769] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:19:45.968 [2024-07-25 14:02:34.979311] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.344 ************************************ 00:19:47.344 END TEST raid_superblock_test 00:19:47.344 ************************************ 00:19:47.344 14:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:19:47.344 00:19:47.344 real 0m17.611s 00:19:47.344 user 0m31.969s 00:19:47.344 sys 0m1.883s 00:19:47.344 14:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:47.344 14:02:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.344 14:02:36 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:19:47.344 14:02:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:47.344 14:02:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:47.344 
14:02:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.344 ************************************ 00:19:47.344 START TEST raid_read_error_test 00:19:47.344 ************************************ 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid0 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=3 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid0 '!=' raid1 ']' 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.UAI0SoTawH 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=127543 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 127543 /var/tmp/spdk-raid.sock 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 127543 ']' 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:47.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.344 14:02:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.344 [2024-07-25 14:02:36.276603] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:19:47.344 [2024-07-25 14:02:36.276811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127543 ] 00:19:47.603 [2024-07-25 14:02:36.433035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.916 [2024-07-25 14:02:36.673932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.916 [2024-07-25 14:02:36.871683] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.485 14:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.485 14:02:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:48.485 14:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:19:48.485 14:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:48.743 BaseBdev1_malloc 00:19:48.743 14:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:49.001 true 00:19:49.001 14:02:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:49.260 [2024-07-25 14:02:38.135454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:49.260 [2024-07-25 14:02:38.135600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.260 [2024-07-25 14:02:38.135650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:49.260 [2024-07-25 14:02:38.135678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.260 [2024-07-25 14:02:38.138398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.260 [2024-07-25 14:02:38.138457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:49.260 BaseBdev1 00:19:49.260 14:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:19:49.260 14:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:49.519 BaseBdev2_malloc 00:19:49.519 14:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:49.778 true 00:19:49.778 14:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:50.037 [2024-07-25 14:02:38.930630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:50.037 [2024-07-25 14:02:38.930784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.037 [2024-07-25 14:02:38.930844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:50.037 [2024-07-25 14:02:38.930870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.037 [2024-07-25 14:02:38.933521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.037 [2024-07-25 14:02:38.933585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:50.037 BaseBdev2 00:19:50.037 14:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:19:50.037 14:02:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:50.295 BaseBdev3_malloc 00:19:50.295 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:50.554 true 00:19:50.554 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:50.812 [2024-07-25 14:02:39.725688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:50.812 [2024-07-25 14:02:39.725838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.812 [2024-07-25 14:02:39.725889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:50.812 [2024-07-25 14:02:39.725921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.812 [2024-07-25 14:02:39.728553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.812 [2024-07-25 14:02:39.728627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:50.812 BaseBdev3 00:19:50.812 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:51.071 [2024-07-25 14:02:39.965766] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.071 [2024-07-25 14:02:39.967993] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.071 [2024-07-25 14:02:39.968094] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.071 [2024-07-25 14:02:39.968390] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:19:51.071 [2024-07-25 
14:02:39.968414] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:51.071 [2024-07-25 14:02:39.968564] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:51.071 [2024-07-25 14:02:39.969011] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:19:51.071 [2024-07-25 14:02:39.969033] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:19:51.071 [2024-07-25 14:02:39.969224] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.071 14:02:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.329 14:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.329 "name": "raid_bdev1", 00:19:51.329 "uuid": "a3d3452b-5bb6-42eb-8878-7e1c14d57084", 00:19:51.329 "strip_size_kb": 64, 00:19:51.329 "state": "online", 00:19:51.329 "raid_level": "raid0", 00:19:51.329 "superblock": true, 00:19:51.329 "num_base_bdevs": 3, 00:19:51.329 "num_base_bdevs_discovered": 3, 00:19:51.329 "num_base_bdevs_operational": 3, 00:19:51.329 "base_bdevs_list": [ 00:19:51.329 { 00:19:51.329 "name": "BaseBdev1", 00:19:51.329 "uuid": "a48acb0b-76ea-547b-87a2-8a089c891084", 00:19:51.329 "is_configured": true, 00:19:51.329 "data_offset": 2048, 00:19:51.329 "data_size": 63488 00:19:51.329 }, 00:19:51.329 { 00:19:51.329 "name": "BaseBdev2", 00:19:51.329 "uuid": "221193ba-9e26-5b7a-a39d-0e8a773c7d08", 00:19:51.329 "is_configured": true, 00:19:51.329 "data_offset": 2048, 00:19:51.329 "data_size": 63488 00:19:51.329 }, 00:19:51.329 { 00:19:51.329 "name": "BaseBdev3", 00:19:51.329 "uuid": "d5f53153-87aa-5397-acf2-9451ae3a1001", 00:19:51.329 "is_configured": true, 00:19:51.329 "data_offset": 2048, 00:19:51.329 "data_size": 63488 00:19:51.329 } 00:19:51.329 ] 00:19:51.329 }' 00:19:51.329 14:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.329 14:02:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.912 14:02:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:19:51.912 14:02:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:52.169 [2024-07-25 14:02:41.051260] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:53.101 14:02:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=3 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.360 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.617 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.617 "name": "raid_bdev1", 00:19:53.617 "uuid": "a3d3452b-5bb6-42eb-8878-7e1c14d57084", 00:19:53.617 "strip_size_kb": 64, 00:19:53.617 "state": "online", 00:19:53.617 "raid_level": "raid0", 00:19:53.617 "superblock": true, 00:19:53.617 "num_base_bdevs": 3, 00:19:53.617 "num_base_bdevs_discovered": 3, 00:19:53.617 "num_base_bdevs_operational": 3, 00:19:53.617 "base_bdevs_list": [ 00:19:53.617 { 00:19:53.617 "name": "BaseBdev1", 00:19:53.617 "uuid": "a48acb0b-76ea-547b-87a2-8a089c891084", 00:19:53.617 "is_configured": true, 00:19:53.617 "data_offset": 2048, 00:19:53.617 "data_size": 63488 00:19:53.617 }, 00:19:53.617 { 00:19:53.617 "name": "BaseBdev2", 00:19:53.617 "uuid": "221193ba-9e26-5b7a-a39d-0e8a773c7d08", 00:19:53.617 "is_configured": true, 00:19:53.617 "data_offset": 2048, 00:19:53.617 "data_size": 63488 00:19:53.617 }, 00:19:53.617 { 00:19:53.617 "name": "BaseBdev3", 00:19:53.617 "uuid": "d5f53153-87aa-5397-acf2-9451ae3a1001", 00:19:53.617 "is_configured": true, 00:19:53.617 "data_offset": 2048, 00:19:53.617 "data_size": 63488 00:19:53.617 } 00:19:53.617 ] 00:19:53.617 }' 00:19:53.617 14:02:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
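In raid_read_error_test each base device is stacked as malloc -> error -> passthru before the raid0 volume is assembled, and read failures are then injected into the first error bdev while bdevperf drives I/O. Roughly, with names, sizes and flags exactly as they appear in this run (only BaseBdev1 shown; BaseBdev2/3 are built the same way, bdevperf is assumed to already be listening on the socket, and the backgrounding is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # malloc -> error -> passthru stack for one base device
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_error_create BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

    # raid0 across the three passthru bdevs, with an on-disk superblock (-s)
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

    # start the bdevperf workload, then make reads on the first base device fail
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure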
00:19:53.617 14:02:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.183 14:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:54.442 [2024-07-25 14:02:43.470088] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:54.442 [2024-07-25 14:02:43.470159] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.442 [2024-07-25 14:02:43.473363] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.442 [2024-07-25 14:02:43.473436] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.442 [2024-07-25 14:02:43.473479] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.442 [2024-07-25 14:02:43.473490] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:19:54.442 0 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 127543 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 127543 ']' 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 127543 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127543 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:54.702 killing process with pid 127543 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127543' 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 127543 00:19:54.702 14:02:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 127543 00:19:54.702 [2024-07-25 14:02:43.522556] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.702 [2024-07-25 14:02:43.720503] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.UAI0SoTawH 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.41 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid0 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.41 != \0\.\0\0 ]] 00:19:56.076 00:19:56.076 real 0m8.709s 00:19:56.076 user 0m13.558s 00:19:56.076 sys 0m0.974s 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
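The pass criterion evaluated just above is that, because raid0 has no redundancy, the injected read errors must surface as a non-zero failure rate in the bdevperf output; schematically (the log path is the mktemp result from this run, and the final echo is illustrative):

    log=/raidtest/tmp.UAI0SoTawH   # bdevperf output captured for this run

    # column 6 of the raid_bdev1 result line is the failure rate per second
    fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')

    # the test passes only if that rate is not 0.00 (here it was 0.41)
    [[ "$fail_per_s" != "0.00" ]] && echo "read errors observed: $fail_per_s/s"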
xtrace_disable 00:19:56.076 ************************************ 00:19:56.076 END TEST raid_read_error_test 00:19:56.076 ************************************ 00:19:56.076 14:02:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.076 14:02:44 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:19:56.076 14:02:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:56.076 14:02:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.076 14:02:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.076 ************************************ 00:19:56.076 START TEST raid_write_error_test 00:19:56.076 ************************************ 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid0 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=3 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid0 '!=' raid1 ']' 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:19:56.076 14:02:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.FXuNc8vX8n 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=127749 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 127749 /var/tmp/spdk-raid.sock 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 127749 ']' 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.076 14:02:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.076 [2024-07-25 14:02:45.060669] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:19:56.076 [2024-07-25 14:02:45.061534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127749 ] 00:19:56.334 [2024-07-25 14:02:45.225155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.593 [2024-07-25 14:02:45.465690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.850 [2024-07-25 14:02:45.665134] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.108 14:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.108 14:02:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:57.108 14:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:19:57.108 14:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:57.366 BaseBdev1_malloc 00:19:57.366 14:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:57.624 true 00:19:57.624 14:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:57.882 [2024-07-25 14:02:46.854860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:57.882 [2024-07-25 14:02:46.855017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.882 [2024-07-25 14:02:46.855077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:57.882 [2024-07-25 
14:02:46.855107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.882 [2024-07-25 14:02:46.857810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.882 [2024-07-25 14:02:46.857880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:57.882 BaseBdev1 00:19:57.882 14:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:19:57.882 14:02:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:58.450 BaseBdev2_malloc 00:19:58.450 14:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:58.450 true 00:19:58.450 14:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:59.016 [2024-07-25 14:02:47.781337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:59.016 [2024-07-25 14:02:47.781498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.016 [2024-07-25 14:02:47.781549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:59.016 [2024-07-25 14:02:47.781574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.016 [2024-07-25 14:02:47.784216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.016 [2024-07-25 14:02:47.784284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:59.016 BaseBdev2 00:19:59.016 14:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:19:59.016 14:02:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:59.016 BaseBdev3_malloc 00:19:59.275 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:59.533 true 00:19:59.533 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:59.792 [2024-07-25 14:02:48.621078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:59.792 [2024-07-25 14:02:48.621219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.792 [2024-07-25 14:02:48.621266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:59.792 [2024-07-25 14:02:48.621298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.792 [2024-07-25 14:02:48.624007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.792 [2024-07-25 14:02:48.624087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:59.792 BaseBdev3 00:19:59.792 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:00.050 [2024-07-25 14:02:48.869190] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.050 [2024-07-25 14:02:48.871449] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.050 [2024-07-25 14:02:48.871560] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.050 [2024-07-25 14:02:48.871814] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:20:00.050 [2024-07-25 14:02:48.871831] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:00.050 [2024-07-25 14:02:48.871983] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:00.050 [2024-07-25 14:02:48.872418] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:20:00.050 [2024-07-25 14:02:48.872445] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:20:00.050 [2024-07-25 14:02:48.872655] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.050 14:02:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.309 14:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.309 "name": "raid_bdev1", 00:20:00.309 "uuid": "0513c93d-ae28-4262-9338-faa0ae780cf0", 00:20:00.309 "strip_size_kb": 64, 00:20:00.309 "state": "online", 00:20:00.309 "raid_level": "raid0", 00:20:00.309 "superblock": true, 00:20:00.309 "num_base_bdevs": 3, 00:20:00.309 "num_base_bdevs_discovered": 3, 00:20:00.309 "num_base_bdevs_operational": 3, 00:20:00.309 "base_bdevs_list": [ 00:20:00.309 { 00:20:00.309 "name": "BaseBdev1", 00:20:00.309 "uuid": "7c58a084-012b-58f2-a74d-f09f5d43c056", 00:20:00.309 "is_configured": true, 00:20:00.309 "data_offset": 2048, 00:20:00.309 "data_size": 63488 00:20:00.309 }, 00:20:00.309 { 00:20:00.309 "name": "BaseBdev2", 00:20:00.309 "uuid": "859aa245-c4e0-5307-8569-9dc62b8df20c", 00:20:00.309 "is_configured": true, 
00:20:00.309 "data_offset": 2048, 00:20:00.309 "data_size": 63488 00:20:00.309 }, 00:20:00.309 { 00:20:00.309 "name": "BaseBdev3", 00:20:00.309 "uuid": "39d7b377-d2d9-50b0-8b01-59b66f0cea6a", 00:20:00.309 "is_configured": true, 00:20:00.309 "data_offset": 2048, 00:20:00.309 "data_size": 63488 00:20:00.309 } 00:20:00.309 ] 00:20:00.309 }' 00:20:00.309 14:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.309 14:02:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.876 14:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:20:00.876 14:02:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:00.876 [2024-07-25 14:02:49.878696] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:01.810 14:02:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=3 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.067 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.325 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.325 "name": "raid_bdev1", 00:20:02.325 "uuid": "0513c93d-ae28-4262-9338-faa0ae780cf0", 00:20:02.325 "strip_size_kb": 64, 00:20:02.325 "state": "online", 00:20:02.325 "raid_level": "raid0", 00:20:02.325 "superblock": true, 00:20:02.325 "num_base_bdevs": 3, 00:20:02.325 "num_base_bdevs_discovered": 3, 00:20:02.325 "num_base_bdevs_operational": 3, 00:20:02.325 "base_bdevs_list": [ 00:20:02.325 { 00:20:02.325 "name": "BaseBdev1", 00:20:02.325 "uuid": "7c58a084-012b-58f2-a74d-f09f5d43c056", 00:20:02.325 "is_configured": true, 
00:20:02.325 "data_offset": 2048, 00:20:02.325 "data_size": 63488 00:20:02.325 }, 00:20:02.325 { 00:20:02.325 "name": "BaseBdev2", 00:20:02.325 "uuid": "859aa245-c4e0-5307-8569-9dc62b8df20c", 00:20:02.325 "is_configured": true, 00:20:02.325 "data_offset": 2048, 00:20:02.325 "data_size": 63488 00:20:02.325 }, 00:20:02.325 { 00:20:02.325 "name": "BaseBdev3", 00:20:02.325 "uuid": "39d7b377-d2d9-50b0-8b01-59b66f0cea6a", 00:20:02.325 "is_configured": true, 00:20:02.325 "data_offset": 2048, 00:20:02.325 "data_size": 63488 00:20:02.325 } 00:20:02.325 ] 00:20:02.325 }' 00:20:02.325 14:02:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.325 14:02:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.259 14:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:03.259 [2024-07-25 14:02:52.290205] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.259 [2024-07-25 14:02:52.290278] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.259 [2024-07-25 14:02:52.293523] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.259 [2024-07-25 14:02:52.293605] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.259 [2024-07-25 14:02:52.293653] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.259 [2024-07-25 14:02:52.293666] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:20:03.259 0 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 127749 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 127749 ']' 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 127749 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127749 00:20:03.518 killing process with pid 127749 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127749' 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 127749 00:20:03.518 14:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 127749 00:20:03.518 [2024-07-25 14:02:52.339285] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.518 [2024-07-25 14:02:52.539090] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.FXuNc8vX8n 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 
00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.42 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid0 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.42 != \0\.\0\0 ]] 00:20:04.891 00:20:04.891 real 0m8.790s 00:20:04.891 user 0m13.682s 00:20:04.891 sys 0m0.957s 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:04.891 ************************************ 00:20:04.891 END TEST raid_write_error_test 00:20:04.891 ************************************ 00:20:04.891 14:02:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.891 14:02:53 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:20:04.891 14:02:53 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:20:04.891 14:02:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:04.891 14:02:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:04.891 14:02:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:04.891 ************************************ 00:20:04.891 START TEST raid_state_function_test 00:20:04.891 ************************************ 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:04.891 14:02:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:04.891 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=127959 00:20:04.892 Process raid pid: 127959 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 127959' 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 127959 /var/tmp/spdk-raid.sock 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 127959 ']' 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.892 14:02:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.892 [2024-07-25 14:02:53.887928] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:20:04.892 [2024-07-25 14:02:53.888116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.149 [2024-07-25 14:02:54.048103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.407 [2024-07-25 14:02:54.279831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.753 [2024-07-25 14:02:54.484445] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.011 14:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.011 14:02:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:20:06.011 14:02:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:06.269 [2024-07-25 14:02:55.095600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:06.269 [2024-07-25 14:02:55.095728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:06.269 [2024-07-25 14:02:55.095745] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:06.269 [2024-07-25 14:02:55.095783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:06.269 [2024-07-25 14:02:55.095794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:06.269 [2024-07-25 14:02:55.095811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.269 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.526 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.526 "name": "Existed_Raid", 00:20:06.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.526 
"strip_size_kb": 64, 00:20:06.526 "state": "configuring", 00:20:06.526 "raid_level": "concat", 00:20:06.526 "superblock": false, 00:20:06.526 "num_base_bdevs": 3, 00:20:06.526 "num_base_bdevs_discovered": 0, 00:20:06.526 "num_base_bdevs_operational": 3, 00:20:06.526 "base_bdevs_list": [ 00:20:06.526 { 00:20:06.526 "name": "BaseBdev1", 00:20:06.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.526 "is_configured": false, 00:20:06.526 "data_offset": 0, 00:20:06.526 "data_size": 0 00:20:06.526 }, 00:20:06.526 { 00:20:06.526 "name": "BaseBdev2", 00:20:06.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.526 "is_configured": false, 00:20:06.526 "data_offset": 0, 00:20:06.526 "data_size": 0 00:20:06.526 }, 00:20:06.526 { 00:20:06.526 "name": "BaseBdev3", 00:20:06.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.526 "is_configured": false, 00:20:06.526 "data_offset": 0, 00:20:06.526 "data_size": 0 00:20:06.526 } 00:20:06.526 ] 00:20:06.526 }' 00:20:06.526 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.526 14:02:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.092 14:02:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:07.349 [2024-07-25 14:02:56.271651] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.349 [2024-07-25 14:02:56.271720] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:20:07.349 14:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:07.606 [2024-07-25 14:02:56.559708] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:07.606 [2024-07-25 14:02:56.559847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:07.606 [2024-07-25 14:02:56.559863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.606 [2024-07-25 14:02:56.559886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.606 [2024-07-25 14:02:56.559895] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:07.607 [2024-07-25 14:02:56.559932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:07.607 14:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:07.865 [2024-07-25 14:02:56.873481] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.865 BaseBdev1 00:20:07.865 14:02:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:07.865 14:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:07.865 14:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:07.865 14:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:07.865 14:02:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:07.865 14:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:07.865 14:02:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.123 14:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:08.382 [ 00:20:08.382 { 00:20:08.382 "name": "BaseBdev1", 00:20:08.382 "aliases": [ 00:20:08.382 "78551cec-3245-4388-95b0-4ea779c08914" 00:20:08.382 ], 00:20:08.382 "product_name": "Malloc disk", 00:20:08.382 "block_size": 512, 00:20:08.382 "num_blocks": 65536, 00:20:08.382 "uuid": "78551cec-3245-4388-95b0-4ea779c08914", 00:20:08.382 "assigned_rate_limits": { 00:20:08.382 "rw_ios_per_sec": 0, 00:20:08.382 "rw_mbytes_per_sec": 0, 00:20:08.382 "r_mbytes_per_sec": 0, 00:20:08.382 "w_mbytes_per_sec": 0 00:20:08.382 }, 00:20:08.382 "claimed": true, 00:20:08.382 "claim_type": "exclusive_write", 00:20:08.382 "zoned": false, 00:20:08.382 "supported_io_types": { 00:20:08.382 "read": true, 00:20:08.382 "write": true, 00:20:08.382 "unmap": true, 00:20:08.382 "flush": true, 00:20:08.382 "reset": true, 00:20:08.382 "nvme_admin": false, 00:20:08.382 "nvme_io": false, 00:20:08.382 "nvme_io_md": false, 00:20:08.382 "write_zeroes": true, 00:20:08.382 "zcopy": true, 00:20:08.382 "get_zone_info": false, 00:20:08.382 "zone_management": false, 00:20:08.382 "zone_append": false, 00:20:08.382 "compare": false, 00:20:08.382 "compare_and_write": false, 00:20:08.382 "abort": true, 00:20:08.382 "seek_hole": false, 00:20:08.382 "seek_data": false, 00:20:08.382 "copy": true, 00:20:08.382 "nvme_iov_md": false 00:20:08.382 }, 00:20:08.382 "memory_domains": [ 00:20:08.382 { 00:20:08.382 "dma_device_id": "system", 00:20:08.382 "dma_device_type": 1 00:20:08.382 }, 00:20:08.382 { 00:20:08.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.382 "dma_device_type": 2 00:20:08.382 } 00:20:08.382 ], 00:20:08.382 "driver_specific": {} 00:20:08.382 } 00:20:08.382 ] 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.382 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.640 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.640 "name": "Existed_Raid", 00:20:08.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.640 "strip_size_kb": 64, 00:20:08.640 "state": "configuring", 00:20:08.640 "raid_level": "concat", 00:20:08.640 "superblock": false, 00:20:08.640 "num_base_bdevs": 3, 00:20:08.640 "num_base_bdevs_discovered": 1, 00:20:08.640 "num_base_bdevs_operational": 3, 00:20:08.640 "base_bdevs_list": [ 00:20:08.640 { 00:20:08.640 "name": "BaseBdev1", 00:20:08.640 "uuid": "78551cec-3245-4388-95b0-4ea779c08914", 00:20:08.640 "is_configured": true, 00:20:08.640 "data_offset": 0, 00:20:08.640 "data_size": 65536 00:20:08.640 }, 00:20:08.640 { 00:20:08.640 "name": "BaseBdev2", 00:20:08.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.640 "is_configured": false, 00:20:08.640 "data_offset": 0, 00:20:08.640 "data_size": 0 00:20:08.640 }, 00:20:08.640 { 00:20:08.640 "name": "BaseBdev3", 00:20:08.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.640 "is_configured": false, 00:20:08.640 "data_offset": 0, 00:20:08.640 "data_size": 0 00:20:08.640 } 00:20:08.640 ] 00:20:08.640 }' 00:20:08.640 14:02:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.640 14:02:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.598 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:09.598 [2024-07-25 14:02:58.597934] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.598 [2024-07-25 14:02:58.598022] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:20:09.598 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:09.857 [2024-07-25 14:02:58.881979] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.857 [2024-07-25 14:02:58.884223] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.857 [2024-07-25 14:02:58.884320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.857 [2024-07-25 14:02:58.884337] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:09.857 [2024-07-25 14:02:58.884383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.857 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.115 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.115 14:02:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.374 14:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.374 "name": "Existed_Raid", 00:20:10.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.374 "strip_size_kb": 64, 00:20:10.374 "state": "configuring", 00:20:10.374 "raid_level": "concat", 00:20:10.374 "superblock": false, 00:20:10.374 "num_base_bdevs": 3, 00:20:10.374 "num_base_bdevs_discovered": 1, 00:20:10.374 "num_base_bdevs_operational": 3, 00:20:10.374 "base_bdevs_list": [ 00:20:10.374 { 00:20:10.374 "name": "BaseBdev1", 00:20:10.374 "uuid": "78551cec-3245-4388-95b0-4ea779c08914", 00:20:10.374 "is_configured": true, 00:20:10.374 "data_offset": 0, 00:20:10.374 "data_size": 65536 00:20:10.374 }, 00:20:10.374 { 00:20:10.374 "name": "BaseBdev2", 00:20:10.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.374 "is_configured": false, 00:20:10.374 "data_offset": 0, 00:20:10.374 "data_size": 0 00:20:10.374 }, 00:20:10.374 { 00:20:10.374 "name": "BaseBdev3", 00:20:10.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.374 "is_configured": false, 00:20:10.374 "data_offset": 0, 00:20:10.374 "data_size": 0 00:20:10.374 } 00:20:10.374 ] 00:20:10.374 }' 00:20:10.374 14:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.374 14:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.940 14:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:11.198 [2024-07-25 14:03:00.078970] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.198 BaseBdev2 00:20:11.198 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:11.198 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:11.198 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:11.198 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:11.198 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:11.198 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:20:11.198 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.457 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:11.715 [ 00:20:11.715 { 00:20:11.715 "name": "BaseBdev2", 00:20:11.715 "aliases": [ 00:20:11.715 "90f92188-df8a-44c7-a48b-8e54ec5b1b07" 00:20:11.715 ], 00:20:11.715 "product_name": "Malloc disk", 00:20:11.715 "block_size": 512, 00:20:11.715 "num_blocks": 65536, 00:20:11.715 "uuid": "90f92188-df8a-44c7-a48b-8e54ec5b1b07", 00:20:11.715 "assigned_rate_limits": { 00:20:11.715 "rw_ios_per_sec": 0, 00:20:11.715 "rw_mbytes_per_sec": 0, 00:20:11.715 "r_mbytes_per_sec": 0, 00:20:11.715 "w_mbytes_per_sec": 0 00:20:11.715 }, 00:20:11.715 "claimed": true, 00:20:11.715 "claim_type": "exclusive_write", 00:20:11.715 "zoned": false, 00:20:11.715 "supported_io_types": { 00:20:11.715 "read": true, 00:20:11.715 "write": true, 00:20:11.715 "unmap": true, 00:20:11.715 "flush": true, 00:20:11.715 "reset": true, 00:20:11.715 "nvme_admin": false, 00:20:11.715 "nvme_io": false, 00:20:11.715 "nvme_io_md": false, 00:20:11.715 "write_zeroes": true, 00:20:11.715 "zcopy": true, 00:20:11.715 "get_zone_info": false, 00:20:11.715 "zone_management": false, 00:20:11.715 "zone_append": false, 00:20:11.715 "compare": false, 00:20:11.715 "compare_and_write": false, 00:20:11.715 "abort": true, 00:20:11.715 "seek_hole": false, 00:20:11.715 "seek_data": false, 00:20:11.715 "copy": true, 00:20:11.715 "nvme_iov_md": false 00:20:11.715 }, 00:20:11.715 "memory_domains": [ 00:20:11.715 { 00:20:11.715 "dma_device_id": "system", 00:20:11.715 "dma_device_type": 1 00:20:11.715 }, 00:20:11.715 { 00:20:11.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.715 "dma_device_type": 2 00:20:11.715 } 00:20:11.715 ], 00:20:11.715 "driver_specific": {} 00:20:11.715 } 00:20:11.715 ] 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.715 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.715 
14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.716 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.974 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.974 "name": "Existed_Raid", 00:20:11.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.974 "strip_size_kb": 64, 00:20:11.974 "state": "configuring", 00:20:11.974 "raid_level": "concat", 00:20:11.974 "superblock": false, 00:20:11.974 "num_base_bdevs": 3, 00:20:11.974 "num_base_bdevs_discovered": 2, 00:20:11.974 "num_base_bdevs_operational": 3, 00:20:11.974 "base_bdevs_list": [ 00:20:11.974 { 00:20:11.974 "name": "BaseBdev1", 00:20:11.974 "uuid": "78551cec-3245-4388-95b0-4ea779c08914", 00:20:11.974 "is_configured": true, 00:20:11.974 "data_offset": 0, 00:20:11.974 "data_size": 65536 00:20:11.974 }, 00:20:11.974 { 00:20:11.974 "name": "BaseBdev2", 00:20:11.974 "uuid": "90f92188-df8a-44c7-a48b-8e54ec5b1b07", 00:20:11.974 "is_configured": true, 00:20:11.974 "data_offset": 0, 00:20:11.974 "data_size": 65536 00:20:11.974 }, 00:20:11.974 { 00:20:11.974 "name": "BaseBdev3", 00:20:11.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.974 "is_configured": false, 00:20:11.974 "data_offset": 0, 00:20:11.974 "data_size": 0 00:20:11.974 } 00:20:11.974 ] 00:20:11.974 }' 00:20:11.974 14:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.974 14:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.539 14:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:12.797 [2024-07-25 14:03:01.761772] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.797 [2024-07-25 14:03:01.761874] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:20:12.797 [2024-07-25 14:03:01.761887] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:12.797 [2024-07-25 14:03:01.762021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:12.797 [2024-07-25 14:03:01.762451] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:20:12.797 [2024-07-25 14:03:01.762468] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:20:12.797 [2024-07-25 14:03:01.762740] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.797 BaseBdev3 00:20:12.797 14:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:12.797 14:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:12.797 14:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:12.797 14:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:12.797 14:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:12.797 14:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:12.797 14:03:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:13.364 [ 00:20:13.364 { 00:20:13.364 "name": "BaseBdev3", 00:20:13.364 "aliases": [ 00:20:13.364 "43fb920f-ee3b-47a9-949d-07d4ec22d3ac" 00:20:13.364 ], 00:20:13.364 "product_name": "Malloc disk", 00:20:13.364 "block_size": 512, 00:20:13.364 "num_blocks": 65536, 00:20:13.364 "uuid": "43fb920f-ee3b-47a9-949d-07d4ec22d3ac", 00:20:13.364 "assigned_rate_limits": { 00:20:13.364 "rw_ios_per_sec": 0, 00:20:13.364 "rw_mbytes_per_sec": 0, 00:20:13.364 "r_mbytes_per_sec": 0, 00:20:13.364 "w_mbytes_per_sec": 0 00:20:13.364 }, 00:20:13.364 "claimed": true, 00:20:13.364 "claim_type": "exclusive_write", 00:20:13.364 "zoned": false, 00:20:13.364 "supported_io_types": { 00:20:13.364 "read": true, 00:20:13.364 "write": true, 00:20:13.364 "unmap": true, 00:20:13.364 "flush": true, 00:20:13.364 "reset": true, 00:20:13.364 "nvme_admin": false, 00:20:13.364 "nvme_io": false, 00:20:13.364 "nvme_io_md": false, 00:20:13.364 "write_zeroes": true, 00:20:13.364 "zcopy": true, 00:20:13.364 "get_zone_info": false, 00:20:13.364 "zone_management": false, 00:20:13.364 "zone_append": false, 00:20:13.364 "compare": false, 00:20:13.364 "compare_and_write": false, 00:20:13.364 "abort": true, 00:20:13.364 "seek_hole": false, 00:20:13.364 "seek_data": false, 00:20:13.364 "copy": true, 00:20:13.364 "nvme_iov_md": false 00:20:13.364 }, 00:20:13.364 "memory_domains": [ 00:20:13.364 { 00:20:13.364 "dma_device_id": "system", 00:20:13.364 "dma_device_type": 1 00:20:13.364 }, 00:20:13.364 { 00:20:13.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.364 "dma_device_type": 2 00:20:13.364 } 00:20:13.364 ], 00:20:13.364 "driver_specific": {} 00:20:13.364 } 00:20:13.364 ] 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.364 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.623 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.623 "name": "Existed_Raid", 00:20:13.623 "uuid": "7fd51c16-8c0b-42c0-b6a6-9fcafa61f67e", 00:20:13.623 "strip_size_kb": 64, 00:20:13.623 "state": "online", 00:20:13.623 "raid_level": "concat", 00:20:13.623 "superblock": false, 00:20:13.623 "num_base_bdevs": 3, 00:20:13.623 "num_base_bdevs_discovered": 3, 00:20:13.623 "num_base_bdevs_operational": 3, 00:20:13.623 "base_bdevs_list": [ 00:20:13.623 { 00:20:13.623 "name": "BaseBdev1", 00:20:13.623 "uuid": "78551cec-3245-4388-95b0-4ea779c08914", 00:20:13.623 "is_configured": true, 00:20:13.623 "data_offset": 0, 00:20:13.623 "data_size": 65536 00:20:13.623 }, 00:20:13.623 { 00:20:13.623 "name": "BaseBdev2", 00:20:13.623 "uuid": "90f92188-df8a-44c7-a48b-8e54ec5b1b07", 00:20:13.623 "is_configured": true, 00:20:13.623 "data_offset": 0, 00:20:13.623 "data_size": 65536 00:20:13.623 }, 00:20:13.623 { 00:20:13.623 "name": "BaseBdev3", 00:20:13.623 "uuid": "43fb920f-ee3b-47a9-949d-07d4ec22d3ac", 00:20:13.623 "is_configured": true, 00:20:13.623 "data_offset": 0, 00:20:13.623 "data_size": 65536 00:20:13.623 } 00:20:13.623 ] 00:20:13.623 }' 00:20:13.623 14:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.623 14:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:14.560 [2024-07-25 14:03:03.459049] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.560 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:14.560 "name": "Existed_Raid", 00:20:14.560 "aliases": [ 00:20:14.560 "7fd51c16-8c0b-42c0-b6a6-9fcafa61f67e" 00:20:14.560 ], 00:20:14.560 "product_name": "Raid Volume", 00:20:14.560 "block_size": 512, 00:20:14.560 "num_blocks": 196608, 00:20:14.560 "uuid": "7fd51c16-8c0b-42c0-b6a6-9fcafa61f67e", 00:20:14.560 "assigned_rate_limits": { 00:20:14.560 "rw_ios_per_sec": 0, 00:20:14.560 "rw_mbytes_per_sec": 0, 00:20:14.560 "r_mbytes_per_sec": 0, 00:20:14.560 "w_mbytes_per_sec": 0 00:20:14.560 }, 00:20:14.560 "claimed": false, 00:20:14.560 "zoned": false, 00:20:14.560 "supported_io_types": { 00:20:14.560 "read": true, 00:20:14.560 "write": true, 00:20:14.560 "unmap": true, 00:20:14.560 "flush": true, 
00:20:14.560 "reset": true, 00:20:14.560 "nvme_admin": false, 00:20:14.560 "nvme_io": false, 00:20:14.560 "nvme_io_md": false, 00:20:14.560 "write_zeroes": true, 00:20:14.560 "zcopy": false, 00:20:14.560 "get_zone_info": false, 00:20:14.560 "zone_management": false, 00:20:14.560 "zone_append": false, 00:20:14.560 "compare": false, 00:20:14.560 "compare_and_write": false, 00:20:14.560 "abort": false, 00:20:14.560 "seek_hole": false, 00:20:14.560 "seek_data": false, 00:20:14.560 "copy": false, 00:20:14.560 "nvme_iov_md": false 00:20:14.560 }, 00:20:14.560 "memory_domains": [ 00:20:14.560 { 00:20:14.560 "dma_device_id": "system", 00:20:14.560 "dma_device_type": 1 00:20:14.560 }, 00:20:14.560 { 00:20:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.560 "dma_device_type": 2 00:20:14.560 }, 00:20:14.560 { 00:20:14.560 "dma_device_id": "system", 00:20:14.560 "dma_device_type": 1 00:20:14.560 }, 00:20:14.560 { 00:20:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.560 "dma_device_type": 2 00:20:14.560 }, 00:20:14.560 { 00:20:14.560 "dma_device_id": "system", 00:20:14.560 "dma_device_type": 1 00:20:14.560 }, 00:20:14.560 { 00:20:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.560 "dma_device_type": 2 00:20:14.560 } 00:20:14.560 ], 00:20:14.560 "driver_specific": { 00:20:14.560 "raid": { 00:20:14.560 "uuid": "7fd51c16-8c0b-42c0-b6a6-9fcafa61f67e", 00:20:14.560 "strip_size_kb": 64, 00:20:14.560 "state": "online", 00:20:14.561 "raid_level": "concat", 00:20:14.561 "superblock": false, 00:20:14.561 "num_base_bdevs": 3, 00:20:14.561 "num_base_bdevs_discovered": 3, 00:20:14.561 "num_base_bdevs_operational": 3, 00:20:14.561 "base_bdevs_list": [ 00:20:14.561 { 00:20:14.561 "name": "BaseBdev1", 00:20:14.561 "uuid": "78551cec-3245-4388-95b0-4ea779c08914", 00:20:14.561 "is_configured": true, 00:20:14.561 "data_offset": 0, 00:20:14.561 "data_size": 65536 00:20:14.561 }, 00:20:14.561 { 00:20:14.561 "name": "BaseBdev2", 00:20:14.561 "uuid": "90f92188-df8a-44c7-a48b-8e54ec5b1b07", 00:20:14.561 "is_configured": true, 00:20:14.561 "data_offset": 0, 00:20:14.561 "data_size": 65536 00:20:14.561 }, 00:20:14.561 { 00:20:14.561 "name": "BaseBdev3", 00:20:14.561 "uuid": "43fb920f-ee3b-47a9-949d-07d4ec22d3ac", 00:20:14.561 "is_configured": true, 00:20:14.561 "data_offset": 0, 00:20:14.561 "data_size": 65536 00:20:14.561 } 00:20:14.561 ] 00:20:14.561 } 00:20:14.561 } 00:20:14.561 }' 00:20:14.561 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:14.561 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:14.561 BaseBdev2 00:20:14.561 BaseBdev3' 00:20:14.561 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:14.561 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:14.561 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:14.820 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:14.820 "name": "BaseBdev1", 00:20:14.820 "aliases": [ 00:20:14.820 "78551cec-3245-4388-95b0-4ea779c08914" 00:20:14.820 ], 00:20:14.820 "product_name": "Malloc disk", 00:20:14.820 "block_size": 512, 00:20:14.820 "num_blocks": 65536, 00:20:14.820 "uuid": "78551cec-3245-4388-95b0-4ea779c08914", 
00:20:14.820 "assigned_rate_limits": { 00:20:14.820 "rw_ios_per_sec": 0, 00:20:14.820 "rw_mbytes_per_sec": 0, 00:20:14.820 "r_mbytes_per_sec": 0, 00:20:14.820 "w_mbytes_per_sec": 0 00:20:14.820 }, 00:20:14.820 "claimed": true, 00:20:14.820 "claim_type": "exclusive_write", 00:20:14.820 "zoned": false, 00:20:14.820 "supported_io_types": { 00:20:14.820 "read": true, 00:20:14.820 "write": true, 00:20:14.820 "unmap": true, 00:20:14.820 "flush": true, 00:20:14.820 "reset": true, 00:20:14.820 "nvme_admin": false, 00:20:14.820 "nvme_io": false, 00:20:14.820 "nvme_io_md": false, 00:20:14.820 "write_zeroes": true, 00:20:14.820 "zcopy": true, 00:20:14.820 "get_zone_info": false, 00:20:14.820 "zone_management": false, 00:20:14.820 "zone_append": false, 00:20:14.820 "compare": false, 00:20:14.820 "compare_and_write": false, 00:20:14.820 "abort": true, 00:20:14.820 "seek_hole": false, 00:20:14.820 "seek_data": false, 00:20:14.820 "copy": true, 00:20:14.820 "nvme_iov_md": false 00:20:14.820 }, 00:20:14.820 "memory_domains": [ 00:20:14.820 { 00:20:14.820 "dma_device_id": "system", 00:20:14.820 "dma_device_type": 1 00:20:14.820 }, 00:20:14.820 { 00:20:14.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.820 "dma_device_type": 2 00:20:14.820 } 00:20:14.820 ], 00:20:14.820 "driver_specific": {} 00:20:14.820 }' 00:20:14.820 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:14.820 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:14.820 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:14.820 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.079 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.079 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:15.079 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.079 14:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.079 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:15.079 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.337 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.337 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:15.337 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:15.337 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:15.337 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:15.602 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:15.602 "name": "BaseBdev2", 00:20:15.602 "aliases": [ 00:20:15.602 "90f92188-df8a-44c7-a48b-8e54ec5b1b07" 00:20:15.602 ], 00:20:15.602 "product_name": "Malloc disk", 00:20:15.603 "block_size": 512, 00:20:15.603 "num_blocks": 65536, 00:20:15.603 "uuid": "90f92188-df8a-44c7-a48b-8e54ec5b1b07", 00:20:15.603 "assigned_rate_limits": { 00:20:15.603 "rw_ios_per_sec": 0, 00:20:15.603 "rw_mbytes_per_sec": 0, 00:20:15.603 "r_mbytes_per_sec": 0, 00:20:15.603 "w_mbytes_per_sec": 0 00:20:15.603 }, 
00:20:15.603 "claimed": true, 00:20:15.603 "claim_type": "exclusive_write", 00:20:15.603 "zoned": false, 00:20:15.603 "supported_io_types": { 00:20:15.603 "read": true, 00:20:15.603 "write": true, 00:20:15.603 "unmap": true, 00:20:15.603 "flush": true, 00:20:15.603 "reset": true, 00:20:15.603 "nvme_admin": false, 00:20:15.603 "nvme_io": false, 00:20:15.603 "nvme_io_md": false, 00:20:15.603 "write_zeroes": true, 00:20:15.603 "zcopy": true, 00:20:15.603 "get_zone_info": false, 00:20:15.603 "zone_management": false, 00:20:15.603 "zone_append": false, 00:20:15.603 "compare": false, 00:20:15.603 "compare_and_write": false, 00:20:15.603 "abort": true, 00:20:15.603 "seek_hole": false, 00:20:15.603 "seek_data": false, 00:20:15.603 "copy": true, 00:20:15.603 "nvme_iov_md": false 00:20:15.603 }, 00:20:15.603 "memory_domains": [ 00:20:15.603 { 00:20:15.603 "dma_device_id": "system", 00:20:15.603 "dma_device_type": 1 00:20:15.603 }, 00:20:15.603 { 00:20:15.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.603 "dma_device_type": 2 00:20:15.603 } 00:20:15.603 ], 00:20:15.603 "driver_specific": {} 00:20:15.603 }' 00:20:15.603 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:15.603 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:15.603 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:15.603 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.603 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:15.860 14:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:16.118 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:16.118 "name": "BaseBdev3", 00:20:16.118 "aliases": [ 00:20:16.118 "43fb920f-ee3b-47a9-949d-07d4ec22d3ac" 00:20:16.118 ], 00:20:16.118 "product_name": "Malloc disk", 00:20:16.118 "block_size": 512, 00:20:16.118 "num_blocks": 65536, 00:20:16.118 "uuid": "43fb920f-ee3b-47a9-949d-07d4ec22d3ac", 00:20:16.118 "assigned_rate_limits": { 00:20:16.118 "rw_ios_per_sec": 0, 00:20:16.118 "rw_mbytes_per_sec": 0, 00:20:16.118 "r_mbytes_per_sec": 0, 00:20:16.118 "w_mbytes_per_sec": 0 00:20:16.118 }, 00:20:16.118 "claimed": true, 00:20:16.118 "claim_type": "exclusive_write", 00:20:16.118 "zoned": false, 00:20:16.118 "supported_io_types": { 00:20:16.118 "read": true, 00:20:16.118 "write": true, 
00:20:16.118 "unmap": true, 00:20:16.118 "flush": true, 00:20:16.118 "reset": true, 00:20:16.118 "nvme_admin": false, 00:20:16.118 "nvme_io": false, 00:20:16.118 "nvme_io_md": false, 00:20:16.118 "write_zeroes": true, 00:20:16.118 "zcopy": true, 00:20:16.118 "get_zone_info": false, 00:20:16.118 "zone_management": false, 00:20:16.118 "zone_append": false, 00:20:16.118 "compare": false, 00:20:16.118 "compare_and_write": false, 00:20:16.118 "abort": true, 00:20:16.118 "seek_hole": false, 00:20:16.118 "seek_data": false, 00:20:16.118 "copy": true, 00:20:16.118 "nvme_iov_md": false 00:20:16.118 }, 00:20:16.118 "memory_domains": [ 00:20:16.118 { 00:20:16.118 "dma_device_id": "system", 00:20:16.118 "dma_device_type": 1 00:20:16.118 }, 00:20:16.118 { 00:20:16.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.118 "dma_device_type": 2 00:20:16.118 } 00:20:16.118 ], 00:20:16.118 "driver_specific": {} 00:20:16.118 }' 00:20:16.118 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.376 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:16.377 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.635 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:16.635 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:16.635 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:16.893 [2024-07-25 14:03:05.739185] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:16.893 [2024-07-25 14:03:05.739238] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:16.893 [2024-07-25 14:03:05.739295] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.893 14:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.152 14:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.152 "name": "Existed_Raid", 00:20:17.152 "uuid": "7fd51c16-8c0b-42c0-b6a6-9fcafa61f67e", 00:20:17.152 "strip_size_kb": 64, 00:20:17.152 "state": "offline", 00:20:17.152 "raid_level": "concat", 00:20:17.152 "superblock": false, 00:20:17.152 "num_base_bdevs": 3, 00:20:17.152 "num_base_bdevs_discovered": 2, 00:20:17.152 "num_base_bdevs_operational": 2, 00:20:17.152 "base_bdevs_list": [ 00:20:17.152 { 00:20:17.152 "name": null, 00:20:17.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.152 "is_configured": false, 00:20:17.152 "data_offset": 0, 00:20:17.152 "data_size": 65536 00:20:17.152 }, 00:20:17.152 { 00:20:17.152 "name": "BaseBdev2", 00:20:17.152 "uuid": "90f92188-df8a-44c7-a48b-8e54ec5b1b07", 00:20:17.152 "is_configured": true, 00:20:17.152 "data_offset": 0, 00:20:17.152 "data_size": 65536 00:20:17.152 }, 00:20:17.152 { 00:20:17.152 "name": "BaseBdev3", 00:20:17.152 "uuid": "43fb920f-ee3b-47a9-949d-07d4ec22d3ac", 00:20:17.152 "is_configured": true, 00:20:17.152 "data_offset": 0, 00:20:17.152 "data_size": 65536 00:20:17.152 } 00:20:17.152 ] 00:20:17.152 }' 00:20:17.152 14:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.152 14:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.087 14:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:18.087 14:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:18.087 14:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.087 14:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:18.087 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:18.087 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:18.087 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:18.346 [2024-07-25 14:03:07.236470] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:18.346 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:18.346 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:18.346 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:18.346 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.604 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:18.604 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:18.604 14:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:19.170 [2024-07-25 14:03:07.909035] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:19.170 [2024-07-25 14:03:07.909123] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:20:19.170 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:19.170 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:19.170 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.170 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:19.428 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:19.428 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:19.428 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:19.428 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:19.428 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:19.428 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:19.686 BaseBdev2 00:20:19.686 14:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:19.686 14:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:19.686 14:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:19.686 14:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:19.686 14:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:19.686 14:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:19.686 14:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:19.944 14:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:20:20.202 [ 00:20:20.202 { 00:20:20.202 "name": "BaseBdev2", 00:20:20.202 "aliases": [ 00:20:20.202 "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6" 00:20:20.202 ], 00:20:20.202 "product_name": "Malloc disk", 00:20:20.202 "block_size": 512, 00:20:20.202 "num_blocks": 65536, 00:20:20.202 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:20.202 "assigned_rate_limits": { 00:20:20.202 "rw_ios_per_sec": 0, 00:20:20.202 "rw_mbytes_per_sec": 0, 00:20:20.202 "r_mbytes_per_sec": 0, 00:20:20.202 "w_mbytes_per_sec": 0 00:20:20.202 }, 00:20:20.202 "claimed": false, 00:20:20.202 "zoned": false, 00:20:20.202 "supported_io_types": { 00:20:20.202 "read": true, 00:20:20.202 "write": true, 00:20:20.202 "unmap": true, 00:20:20.202 "flush": true, 00:20:20.202 "reset": true, 00:20:20.202 "nvme_admin": false, 00:20:20.202 "nvme_io": false, 00:20:20.202 "nvme_io_md": false, 00:20:20.202 "write_zeroes": true, 00:20:20.202 "zcopy": true, 00:20:20.202 "get_zone_info": false, 00:20:20.202 "zone_management": false, 00:20:20.202 "zone_append": false, 00:20:20.202 "compare": false, 00:20:20.202 "compare_and_write": false, 00:20:20.202 "abort": true, 00:20:20.203 "seek_hole": false, 00:20:20.203 "seek_data": false, 00:20:20.203 "copy": true, 00:20:20.203 "nvme_iov_md": false 00:20:20.203 }, 00:20:20.203 "memory_domains": [ 00:20:20.203 { 00:20:20.203 "dma_device_id": "system", 00:20:20.203 "dma_device_type": 1 00:20:20.203 }, 00:20:20.203 { 00:20:20.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.203 "dma_device_type": 2 00:20:20.203 } 00:20:20.203 ], 00:20:20.203 "driver_specific": {} 00:20:20.203 } 00:20:20.203 ] 00:20:20.203 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:20.203 14:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:20.203 14:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:20.203 14:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:20.461 BaseBdev3 00:20:20.461 14:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:20.461 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:20.461 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:20.461 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:20.461 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:20.461 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:20.461 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:20.719 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:20.977 [ 00:20:20.977 { 00:20:20.977 "name": "BaseBdev3", 00:20:20.977 "aliases": [ 00:20:20.977 "027a12c4-97fa-492f-874f-c91d8a2e7a90" 00:20:20.977 ], 00:20:20.977 "product_name": "Malloc disk", 00:20:20.977 "block_size": 512, 00:20:20.977 "num_blocks": 65536, 00:20:20.977 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:20.977 
"assigned_rate_limits": { 00:20:20.977 "rw_ios_per_sec": 0, 00:20:20.977 "rw_mbytes_per_sec": 0, 00:20:20.977 "r_mbytes_per_sec": 0, 00:20:20.977 "w_mbytes_per_sec": 0 00:20:20.977 }, 00:20:20.977 "claimed": false, 00:20:20.977 "zoned": false, 00:20:20.977 "supported_io_types": { 00:20:20.977 "read": true, 00:20:20.977 "write": true, 00:20:20.977 "unmap": true, 00:20:20.977 "flush": true, 00:20:20.977 "reset": true, 00:20:20.977 "nvme_admin": false, 00:20:20.977 "nvme_io": false, 00:20:20.977 "nvme_io_md": false, 00:20:20.977 "write_zeroes": true, 00:20:20.977 "zcopy": true, 00:20:20.977 "get_zone_info": false, 00:20:20.977 "zone_management": false, 00:20:20.977 "zone_append": false, 00:20:20.977 "compare": false, 00:20:20.977 "compare_and_write": false, 00:20:20.977 "abort": true, 00:20:20.977 "seek_hole": false, 00:20:20.977 "seek_data": false, 00:20:20.977 "copy": true, 00:20:20.977 "nvme_iov_md": false 00:20:20.977 }, 00:20:20.977 "memory_domains": [ 00:20:20.977 { 00:20:20.977 "dma_device_id": "system", 00:20:20.977 "dma_device_type": 1 00:20:20.977 }, 00:20:20.977 { 00:20:20.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.977 "dma_device_type": 2 00:20:20.977 } 00:20:20.977 ], 00:20:20.977 "driver_specific": {} 00:20:20.977 } 00:20:20.977 ] 00:20:20.977 14:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:20.977 14:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:20.977 14:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:20.977 14:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:21.236 [2024-07-25 14:03:10.097627] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:21.236 [2024-07-25 14:03:10.097747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:21.236 [2024-07-25 14:03:10.097823] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.236 [2024-07-25 14:03:10.099982] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.236 14:03:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.236 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.494 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.494 "name": "Existed_Raid", 00:20:21.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.494 "strip_size_kb": 64, 00:20:21.494 "state": "configuring", 00:20:21.494 "raid_level": "concat", 00:20:21.494 "superblock": false, 00:20:21.494 "num_base_bdevs": 3, 00:20:21.494 "num_base_bdevs_discovered": 2, 00:20:21.494 "num_base_bdevs_operational": 3, 00:20:21.494 "base_bdevs_list": [ 00:20:21.494 { 00:20:21.494 "name": "BaseBdev1", 00:20:21.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.494 "is_configured": false, 00:20:21.494 "data_offset": 0, 00:20:21.494 "data_size": 0 00:20:21.494 }, 00:20:21.494 { 00:20:21.494 "name": "BaseBdev2", 00:20:21.494 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:21.494 "is_configured": true, 00:20:21.494 "data_offset": 0, 00:20:21.494 "data_size": 65536 00:20:21.494 }, 00:20:21.494 { 00:20:21.494 "name": "BaseBdev3", 00:20:21.494 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:21.494 "is_configured": true, 00:20:21.494 "data_offset": 0, 00:20:21.494 "data_size": 65536 00:20:21.494 } 00:20:21.494 ] 00:20:21.494 }' 00:20:21.494 14:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.494 14:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.060 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:22.318 [2024-07-25 14:03:11.317904] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:22.318 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.319 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.577 14:03:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.577 "name": "Existed_Raid", 00:20:22.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.577 "strip_size_kb": 64, 00:20:22.577 "state": "configuring", 00:20:22.577 "raid_level": "concat", 00:20:22.577 "superblock": false, 00:20:22.577 "num_base_bdevs": 3, 00:20:22.577 "num_base_bdevs_discovered": 1, 00:20:22.577 "num_base_bdevs_operational": 3, 00:20:22.577 "base_bdevs_list": [ 00:20:22.577 { 00:20:22.577 "name": "BaseBdev1", 00:20:22.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.577 "is_configured": false, 00:20:22.577 "data_offset": 0, 00:20:22.577 "data_size": 0 00:20:22.577 }, 00:20:22.577 { 00:20:22.577 "name": null, 00:20:22.577 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:22.577 "is_configured": false, 00:20:22.577 "data_offset": 0, 00:20:22.577 "data_size": 65536 00:20:22.577 }, 00:20:22.577 { 00:20:22.577 "name": "BaseBdev3", 00:20:22.577 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:22.577 "is_configured": true, 00:20:22.577 "data_offset": 0, 00:20:22.577 "data_size": 65536 00:20:22.577 } 00:20:22.577 ] 00:20:22.577 }' 00:20:22.577 14:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.577 14:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.512 14:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.512 14:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:23.512 14:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:23.512 14:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:23.770 [2024-07-25 14:03:12.749917] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.770 BaseBdev1 00:20:23.770 14:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:23.770 14:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:23.770 14:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:23.770 14:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:23.770 14:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:23.770 14:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:23.770 14:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.028 14:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:24.287 [ 00:20:24.287 { 00:20:24.287 "name": "BaseBdev1", 00:20:24.287 "aliases": [ 00:20:24.287 "8438c51a-856c-4a25-9720-6c9495f48333" 00:20:24.287 ], 00:20:24.287 "product_name": "Malloc disk", 00:20:24.287 "block_size": 512, 00:20:24.287 "num_blocks": 65536, 00:20:24.287 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:24.287 "assigned_rate_limits": { 00:20:24.287 
"rw_ios_per_sec": 0, 00:20:24.287 "rw_mbytes_per_sec": 0, 00:20:24.287 "r_mbytes_per_sec": 0, 00:20:24.287 "w_mbytes_per_sec": 0 00:20:24.287 }, 00:20:24.287 "claimed": true, 00:20:24.287 "claim_type": "exclusive_write", 00:20:24.287 "zoned": false, 00:20:24.287 "supported_io_types": { 00:20:24.287 "read": true, 00:20:24.287 "write": true, 00:20:24.287 "unmap": true, 00:20:24.287 "flush": true, 00:20:24.287 "reset": true, 00:20:24.287 "nvme_admin": false, 00:20:24.287 "nvme_io": false, 00:20:24.287 "nvme_io_md": false, 00:20:24.287 "write_zeroes": true, 00:20:24.287 "zcopy": true, 00:20:24.287 "get_zone_info": false, 00:20:24.287 "zone_management": false, 00:20:24.287 "zone_append": false, 00:20:24.287 "compare": false, 00:20:24.287 "compare_and_write": false, 00:20:24.287 "abort": true, 00:20:24.287 "seek_hole": false, 00:20:24.287 "seek_data": false, 00:20:24.287 "copy": true, 00:20:24.287 "nvme_iov_md": false 00:20:24.287 }, 00:20:24.287 "memory_domains": [ 00:20:24.287 { 00:20:24.287 "dma_device_id": "system", 00:20:24.287 "dma_device_type": 1 00:20:24.287 }, 00:20:24.287 { 00:20:24.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.287 "dma_device_type": 2 00:20:24.287 } 00:20:24.287 ], 00:20:24.287 "driver_specific": {} 00:20:24.287 } 00:20:24.287 ] 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.545 "name": "Existed_Raid", 00:20:24.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.545 "strip_size_kb": 64, 00:20:24.545 "state": "configuring", 00:20:24.545 "raid_level": "concat", 00:20:24.545 "superblock": false, 00:20:24.545 "num_base_bdevs": 3, 00:20:24.545 "num_base_bdevs_discovered": 2, 00:20:24.545 "num_base_bdevs_operational": 3, 00:20:24.545 "base_bdevs_list": [ 00:20:24.545 { 00:20:24.545 "name": "BaseBdev1", 00:20:24.545 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:24.545 "is_configured": true, 00:20:24.545 "data_offset": 0, 00:20:24.545 
"data_size": 65536 00:20:24.545 }, 00:20:24.545 { 00:20:24.545 "name": null, 00:20:24.545 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:24.545 "is_configured": false, 00:20:24.545 "data_offset": 0, 00:20:24.545 "data_size": 65536 00:20:24.545 }, 00:20:24.545 { 00:20:24.545 "name": "BaseBdev3", 00:20:24.545 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:24.545 "is_configured": true, 00:20:24.545 "data_offset": 0, 00:20:24.545 "data_size": 65536 00:20:24.545 } 00:20:24.545 ] 00:20:24.545 }' 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.545 14:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.483 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.483 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:25.483 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:25.483 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:25.783 [2024-07-25 14:03:14.782070] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.042 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.043 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.043 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.043 14:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.043 14:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.043 "name": "Existed_Raid", 00:20:26.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.043 "strip_size_kb": 64, 00:20:26.043 "state": "configuring", 00:20:26.043 "raid_level": "concat", 00:20:26.043 "superblock": false, 00:20:26.043 "num_base_bdevs": 3, 00:20:26.043 "num_base_bdevs_discovered": 1, 00:20:26.043 "num_base_bdevs_operational": 3, 00:20:26.043 "base_bdevs_list": [ 00:20:26.043 { 00:20:26.043 "name": "BaseBdev1", 00:20:26.043 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:26.043 "is_configured": 
true, 00:20:26.043 "data_offset": 0, 00:20:26.043 "data_size": 65536 00:20:26.043 }, 00:20:26.043 { 00:20:26.043 "name": null, 00:20:26.043 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:26.043 "is_configured": false, 00:20:26.043 "data_offset": 0, 00:20:26.043 "data_size": 65536 00:20:26.043 }, 00:20:26.043 { 00:20:26.043 "name": null, 00:20:26.043 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:26.043 "is_configured": false, 00:20:26.043 "data_offset": 0, 00:20:26.043 "data_size": 65536 00:20:26.043 } 00:20:26.043 ] 00:20:26.043 }' 00:20:26.043 14:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.043 14:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.977 14:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.977 14:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:27.235 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:27.235 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:27.494 [2024-07-25 14:03:16.402080] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.494 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.752 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.752 "name": "Existed_Raid", 00:20:27.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.752 "strip_size_kb": 64, 00:20:27.752 "state": "configuring", 00:20:27.752 "raid_level": "concat", 00:20:27.752 "superblock": false, 00:20:27.752 "num_base_bdevs": 3, 00:20:27.752 "num_base_bdevs_discovered": 2, 00:20:27.752 "num_base_bdevs_operational": 3, 00:20:27.752 "base_bdevs_list": [ 00:20:27.752 { 00:20:27.752 "name": "BaseBdev1", 00:20:27.752 
"uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:27.752 "is_configured": true, 00:20:27.752 "data_offset": 0, 00:20:27.752 "data_size": 65536 00:20:27.752 }, 00:20:27.752 { 00:20:27.752 "name": null, 00:20:27.753 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:27.753 "is_configured": false, 00:20:27.753 "data_offset": 0, 00:20:27.753 "data_size": 65536 00:20:27.753 }, 00:20:27.753 { 00:20:27.753 "name": "BaseBdev3", 00:20:27.753 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:27.753 "is_configured": true, 00:20:27.753 "data_offset": 0, 00:20:27.753 "data_size": 65536 00:20:27.753 } 00:20:27.753 ] 00:20:27.753 }' 00:20:27.753 14:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.753 14:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.319 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.319 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:28.884 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:28.884 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:28.884 [2024-07-25 14:03:17.866405] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.142 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.143 14:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.401 14:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.401 "name": "Existed_Raid", 00:20:29.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.401 "strip_size_kb": 64, 00:20:29.401 "state": "configuring", 00:20:29.401 "raid_level": "concat", 00:20:29.401 "superblock": false, 00:20:29.401 "num_base_bdevs": 3, 00:20:29.401 "num_base_bdevs_discovered": 1, 00:20:29.401 "num_base_bdevs_operational": 3, 00:20:29.401 "base_bdevs_list": [ 00:20:29.401 { 
00:20:29.401 "name": null, 00:20:29.401 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:29.401 "is_configured": false, 00:20:29.401 "data_offset": 0, 00:20:29.401 "data_size": 65536 00:20:29.401 }, 00:20:29.401 { 00:20:29.401 "name": null, 00:20:29.401 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:29.401 "is_configured": false, 00:20:29.401 "data_offset": 0, 00:20:29.401 "data_size": 65536 00:20:29.401 }, 00:20:29.401 { 00:20:29.401 "name": "BaseBdev3", 00:20:29.401 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:29.401 "is_configured": true, 00:20:29.401 "data_offset": 0, 00:20:29.401 "data_size": 65536 00:20:29.401 } 00:20:29.401 ] 00:20:29.401 }' 00:20:29.401 14:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.401 14:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.967 14:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.967 14:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:30.226 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:30.226 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:30.793 [2024-07-25 14:03:19.586817] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.793 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.794 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.794 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.794 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.794 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.052 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.052 "name": "Existed_Raid", 00:20:31.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.052 "strip_size_kb": 64, 00:20:31.052 "state": "configuring", 00:20:31.052 "raid_level": "concat", 00:20:31.052 "superblock": false, 00:20:31.052 "num_base_bdevs": 3, 00:20:31.052 "num_base_bdevs_discovered": 2, 00:20:31.052 
"num_base_bdevs_operational": 3, 00:20:31.052 "base_bdevs_list": [ 00:20:31.052 { 00:20:31.052 "name": null, 00:20:31.052 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:31.052 "is_configured": false, 00:20:31.052 "data_offset": 0, 00:20:31.052 "data_size": 65536 00:20:31.052 }, 00:20:31.052 { 00:20:31.052 "name": "BaseBdev2", 00:20:31.052 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:31.052 "is_configured": true, 00:20:31.052 "data_offset": 0, 00:20:31.052 "data_size": 65536 00:20:31.052 }, 00:20:31.052 { 00:20:31.052 "name": "BaseBdev3", 00:20:31.052 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:31.052 "is_configured": true, 00:20:31.052 "data_offset": 0, 00:20:31.052 "data_size": 65536 00:20:31.052 } 00:20:31.052 ] 00:20:31.052 }' 00:20:31.052 14:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.052 14:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.617 14:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.617 14:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:31.875 14:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:31.875 14:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.875 14:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:32.132 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8438c51a-856c-4a25-9720-6c9495f48333 00:20:32.389 [2024-07-25 14:03:21.318087] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:32.389 [2024-07-25 14:03:21.318147] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:20:32.389 [2024-07-25 14:03:21.318158] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:32.389 [2024-07-25 14:03:21.318283] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:32.389 [2024-07-25 14:03:21.318635] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:20:32.389 [2024-07-25 14:03:21.318660] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:20:32.389 [2024-07-25 14:03:21.318931] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.389 NewBaseBdev 00:20:32.389 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:32.389 14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:32.389 14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:32.389 14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:32.389 14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:32.389 14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:32.389 
14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:32.646 14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:32.976 [ 00:20:32.976 { 00:20:32.976 "name": "NewBaseBdev", 00:20:32.976 "aliases": [ 00:20:32.976 "8438c51a-856c-4a25-9720-6c9495f48333" 00:20:32.976 ], 00:20:32.976 "product_name": "Malloc disk", 00:20:32.976 "block_size": 512, 00:20:32.976 "num_blocks": 65536, 00:20:32.976 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:32.976 "assigned_rate_limits": { 00:20:32.976 "rw_ios_per_sec": 0, 00:20:32.976 "rw_mbytes_per_sec": 0, 00:20:32.976 "r_mbytes_per_sec": 0, 00:20:32.976 "w_mbytes_per_sec": 0 00:20:32.976 }, 00:20:32.976 "claimed": true, 00:20:32.976 "claim_type": "exclusive_write", 00:20:32.976 "zoned": false, 00:20:32.976 "supported_io_types": { 00:20:32.976 "read": true, 00:20:32.976 "write": true, 00:20:32.976 "unmap": true, 00:20:32.976 "flush": true, 00:20:32.976 "reset": true, 00:20:32.976 "nvme_admin": false, 00:20:32.976 "nvme_io": false, 00:20:32.976 "nvme_io_md": false, 00:20:32.976 "write_zeroes": true, 00:20:32.976 "zcopy": true, 00:20:32.976 "get_zone_info": false, 00:20:32.976 "zone_management": false, 00:20:32.976 "zone_append": false, 00:20:32.976 "compare": false, 00:20:32.976 "compare_and_write": false, 00:20:32.976 "abort": true, 00:20:32.976 "seek_hole": false, 00:20:32.976 "seek_data": false, 00:20:32.976 "copy": true, 00:20:32.976 "nvme_iov_md": false 00:20:32.976 }, 00:20:32.976 "memory_domains": [ 00:20:32.976 { 00:20:32.976 "dma_device_id": "system", 00:20:32.976 "dma_device_type": 1 00:20:32.976 }, 00:20:32.976 { 00:20:32.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.976 "dma_device_type": 2 00:20:32.976 } 00:20:32.976 ], 00:20:32.976 "driver_specific": {} 00:20:32.976 } 00:20:32.976 ] 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.976 14:03:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.235 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.235 "name": "Existed_Raid", 00:20:33.235 "uuid": "06be4710-4dd4-42ac-bc9e-24f63e6ca30d", 00:20:33.235 "strip_size_kb": 64, 00:20:33.235 "state": "online", 00:20:33.235 "raid_level": "concat", 00:20:33.235 "superblock": false, 00:20:33.235 "num_base_bdevs": 3, 00:20:33.235 "num_base_bdevs_discovered": 3, 00:20:33.235 "num_base_bdevs_operational": 3, 00:20:33.235 "base_bdevs_list": [ 00:20:33.235 { 00:20:33.235 "name": "NewBaseBdev", 00:20:33.235 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:33.235 "is_configured": true, 00:20:33.235 "data_offset": 0, 00:20:33.235 "data_size": 65536 00:20:33.235 }, 00:20:33.235 { 00:20:33.235 "name": "BaseBdev2", 00:20:33.235 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:33.235 "is_configured": true, 00:20:33.235 "data_offset": 0, 00:20:33.235 "data_size": 65536 00:20:33.235 }, 00:20:33.235 { 00:20:33.235 "name": "BaseBdev3", 00:20:33.235 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:33.235 "is_configured": true, 00:20:33.235 "data_offset": 0, 00:20:33.235 "data_size": 65536 00:20:33.235 } 00:20:33.235 ] 00:20:33.235 }' 00:20:33.235 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.235 14:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:33.801 14:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:34.059 [2024-07-25 14:03:23.042583] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.059 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:34.059 "name": "Existed_Raid", 00:20:34.059 "aliases": [ 00:20:34.059 "06be4710-4dd4-42ac-bc9e-24f63e6ca30d" 00:20:34.059 ], 00:20:34.059 "product_name": "Raid Volume", 00:20:34.059 "block_size": 512, 00:20:34.059 "num_blocks": 196608, 00:20:34.059 "uuid": "06be4710-4dd4-42ac-bc9e-24f63e6ca30d", 00:20:34.059 "assigned_rate_limits": { 00:20:34.059 "rw_ios_per_sec": 0, 00:20:34.059 "rw_mbytes_per_sec": 0, 00:20:34.059 "r_mbytes_per_sec": 0, 00:20:34.059 "w_mbytes_per_sec": 0 00:20:34.059 }, 00:20:34.059 "claimed": false, 00:20:34.059 "zoned": false, 00:20:34.059 "supported_io_types": { 00:20:34.059 "read": true, 00:20:34.059 "write": true, 00:20:34.059 "unmap": true, 00:20:34.059 "flush": true, 00:20:34.059 "reset": true, 00:20:34.059 "nvme_admin": false, 00:20:34.059 "nvme_io": false, 00:20:34.059 "nvme_io_md": false, 00:20:34.059 "write_zeroes": true, 00:20:34.059 
"zcopy": false, 00:20:34.059 "get_zone_info": false, 00:20:34.059 "zone_management": false, 00:20:34.059 "zone_append": false, 00:20:34.059 "compare": false, 00:20:34.059 "compare_and_write": false, 00:20:34.059 "abort": false, 00:20:34.059 "seek_hole": false, 00:20:34.059 "seek_data": false, 00:20:34.059 "copy": false, 00:20:34.059 "nvme_iov_md": false 00:20:34.059 }, 00:20:34.059 "memory_domains": [ 00:20:34.059 { 00:20:34.059 "dma_device_id": "system", 00:20:34.059 "dma_device_type": 1 00:20:34.059 }, 00:20:34.059 { 00:20:34.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.059 "dma_device_type": 2 00:20:34.059 }, 00:20:34.059 { 00:20:34.059 "dma_device_id": "system", 00:20:34.059 "dma_device_type": 1 00:20:34.059 }, 00:20:34.059 { 00:20:34.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.059 "dma_device_type": 2 00:20:34.059 }, 00:20:34.059 { 00:20:34.059 "dma_device_id": "system", 00:20:34.059 "dma_device_type": 1 00:20:34.059 }, 00:20:34.059 { 00:20:34.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.059 "dma_device_type": 2 00:20:34.059 } 00:20:34.059 ], 00:20:34.060 "driver_specific": { 00:20:34.060 "raid": { 00:20:34.060 "uuid": "06be4710-4dd4-42ac-bc9e-24f63e6ca30d", 00:20:34.060 "strip_size_kb": 64, 00:20:34.060 "state": "online", 00:20:34.060 "raid_level": "concat", 00:20:34.060 "superblock": false, 00:20:34.060 "num_base_bdevs": 3, 00:20:34.060 "num_base_bdevs_discovered": 3, 00:20:34.060 "num_base_bdevs_operational": 3, 00:20:34.060 "base_bdevs_list": [ 00:20:34.060 { 00:20:34.060 "name": "NewBaseBdev", 00:20:34.060 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:34.060 "is_configured": true, 00:20:34.060 "data_offset": 0, 00:20:34.060 "data_size": 65536 00:20:34.060 }, 00:20:34.060 { 00:20:34.060 "name": "BaseBdev2", 00:20:34.060 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:34.060 "is_configured": true, 00:20:34.060 "data_offset": 0, 00:20:34.060 "data_size": 65536 00:20:34.060 }, 00:20:34.060 { 00:20:34.060 "name": "BaseBdev3", 00:20:34.060 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:34.060 "is_configured": true, 00:20:34.060 "data_offset": 0, 00:20:34.060 "data_size": 65536 00:20:34.060 } 00:20:34.060 ] 00:20:34.060 } 00:20:34.060 } 00:20:34.060 }' 00:20:34.060 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:34.317 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:34.317 BaseBdev2 00:20:34.317 BaseBdev3' 00:20:34.317 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:34.317 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:34.317 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:34.317 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:34.317 "name": "NewBaseBdev", 00:20:34.317 "aliases": [ 00:20:34.317 "8438c51a-856c-4a25-9720-6c9495f48333" 00:20:34.317 ], 00:20:34.317 "product_name": "Malloc disk", 00:20:34.317 "block_size": 512, 00:20:34.317 "num_blocks": 65536, 00:20:34.317 "uuid": "8438c51a-856c-4a25-9720-6c9495f48333", 00:20:34.317 "assigned_rate_limits": { 00:20:34.317 "rw_ios_per_sec": 0, 00:20:34.317 "rw_mbytes_per_sec": 0, 00:20:34.317 "r_mbytes_per_sec": 0, 00:20:34.317 
"w_mbytes_per_sec": 0 00:20:34.317 }, 00:20:34.317 "claimed": true, 00:20:34.317 "claim_type": "exclusive_write", 00:20:34.317 "zoned": false, 00:20:34.317 "supported_io_types": { 00:20:34.317 "read": true, 00:20:34.317 "write": true, 00:20:34.317 "unmap": true, 00:20:34.317 "flush": true, 00:20:34.317 "reset": true, 00:20:34.317 "nvme_admin": false, 00:20:34.317 "nvme_io": false, 00:20:34.317 "nvme_io_md": false, 00:20:34.317 "write_zeroes": true, 00:20:34.317 "zcopy": true, 00:20:34.317 "get_zone_info": false, 00:20:34.317 "zone_management": false, 00:20:34.317 "zone_append": false, 00:20:34.317 "compare": false, 00:20:34.317 "compare_and_write": false, 00:20:34.317 "abort": true, 00:20:34.317 "seek_hole": false, 00:20:34.317 "seek_data": false, 00:20:34.317 "copy": true, 00:20:34.317 "nvme_iov_md": false 00:20:34.317 }, 00:20:34.317 "memory_domains": [ 00:20:34.317 { 00:20:34.317 "dma_device_id": "system", 00:20:34.317 "dma_device_type": 1 00:20:34.317 }, 00:20:34.317 { 00:20:34.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.317 "dma_device_type": 2 00:20:34.317 } 00:20:34.317 ], 00:20:34.317 "driver_specific": {} 00:20:34.317 }' 00:20:34.317 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.575 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:34.575 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:34.575 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.575 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:34.575 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:34.575 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:34.575 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:34.832 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:34.832 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:34.832 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:34.832 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:34.832 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:34.832 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:34.832 14:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.090 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.090 "name": "BaseBdev2", 00:20:35.090 "aliases": [ 00:20:35.090 "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6" 00:20:35.090 ], 00:20:35.090 "product_name": "Malloc disk", 00:20:35.090 "block_size": 512, 00:20:35.090 "num_blocks": 65536, 00:20:35.090 "uuid": "fc80aa3e-6c52-4de4-bbbd-f2ab202860b6", 00:20:35.090 "assigned_rate_limits": { 00:20:35.090 "rw_ios_per_sec": 0, 00:20:35.090 "rw_mbytes_per_sec": 0, 00:20:35.090 "r_mbytes_per_sec": 0, 00:20:35.090 "w_mbytes_per_sec": 0 00:20:35.090 }, 00:20:35.090 "claimed": true, 00:20:35.090 "claim_type": "exclusive_write", 00:20:35.090 "zoned": false, 00:20:35.090 "supported_io_types": { 00:20:35.090 "read": 
true, 00:20:35.090 "write": true, 00:20:35.090 "unmap": true, 00:20:35.090 "flush": true, 00:20:35.090 "reset": true, 00:20:35.090 "nvme_admin": false, 00:20:35.090 "nvme_io": false, 00:20:35.090 "nvme_io_md": false, 00:20:35.090 "write_zeroes": true, 00:20:35.090 "zcopy": true, 00:20:35.090 "get_zone_info": false, 00:20:35.090 "zone_management": false, 00:20:35.090 "zone_append": false, 00:20:35.090 "compare": false, 00:20:35.090 "compare_and_write": false, 00:20:35.090 "abort": true, 00:20:35.090 "seek_hole": false, 00:20:35.090 "seek_data": false, 00:20:35.090 "copy": true, 00:20:35.090 "nvme_iov_md": false 00:20:35.090 }, 00:20:35.090 "memory_domains": [ 00:20:35.090 { 00:20:35.090 "dma_device_id": "system", 00:20:35.090 "dma_device_type": 1 00:20:35.090 }, 00:20:35.090 { 00:20:35.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.090 "dma_device_type": 2 00:20:35.090 } 00:20:35.090 ], 00:20:35.090 "driver_specific": {} 00:20:35.090 }' 00:20:35.090 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.090 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.090 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.090 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:35.348 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.606 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.606 "name": "BaseBdev3", 00:20:35.606 "aliases": [ 00:20:35.606 "027a12c4-97fa-492f-874f-c91d8a2e7a90" 00:20:35.606 ], 00:20:35.606 "product_name": "Malloc disk", 00:20:35.606 "block_size": 512, 00:20:35.606 "num_blocks": 65536, 00:20:35.606 "uuid": "027a12c4-97fa-492f-874f-c91d8a2e7a90", 00:20:35.606 "assigned_rate_limits": { 00:20:35.606 "rw_ios_per_sec": 0, 00:20:35.606 "rw_mbytes_per_sec": 0, 00:20:35.606 "r_mbytes_per_sec": 0, 00:20:35.606 "w_mbytes_per_sec": 0 00:20:35.606 }, 00:20:35.606 "claimed": true, 00:20:35.606 "claim_type": "exclusive_write", 00:20:35.606 "zoned": false, 00:20:35.606 "supported_io_types": { 00:20:35.606 "read": true, 00:20:35.606 "write": true, 00:20:35.606 "unmap": true, 00:20:35.606 "flush": true, 00:20:35.606 "reset": true, 00:20:35.606 "nvme_admin": false, 00:20:35.606 "nvme_io": false, 00:20:35.606 
"nvme_io_md": false, 00:20:35.606 "write_zeroes": true, 00:20:35.606 "zcopy": true, 00:20:35.606 "get_zone_info": false, 00:20:35.606 "zone_management": false, 00:20:35.606 "zone_append": false, 00:20:35.606 "compare": false, 00:20:35.606 "compare_and_write": false, 00:20:35.606 "abort": true, 00:20:35.606 "seek_hole": false, 00:20:35.606 "seek_data": false, 00:20:35.606 "copy": true, 00:20:35.606 "nvme_iov_md": false 00:20:35.606 }, 00:20:35.606 "memory_domains": [ 00:20:35.606 { 00:20:35.606 "dma_device_id": "system", 00:20:35.606 "dma_device_type": 1 00:20:35.606 }, 00:20:35.606 { 00:20:35.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.606 "dma_device_type": 2 00:20:35.606 } 00:20:35.606 ], 00:20:35.606 "driver_specific": {} 00:20:35.606 }' 00:20:35.606 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.863 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.120 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.120 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.120 14:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:36.378 [2024-07-25 14:03:25.210616] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.378 [2024-07-25 14:03:25.210665] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.378 [2024-07-25 14:03:25.210745] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.378 [2024-07-25 14:03:25.210812] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.378 [2024-07-25 14:03:25.210824] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 127959 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 127959 ']' 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 127959 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127959 00:20:36.378 
14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.378 killing process with pid 127959 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127959' 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 127959 00:20:36.378 [2024-07-25 14:03:25.255551] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:36.378 14:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 127959 00:20:36.658 [2024-07-25 14:03:25.505021] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.596 14:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:37.596 00:20:37.596 real 0m32.820s 00:20:37.596 user 1m0.858s 00:20:37.596 sys 0m3.878s 00:20:37.596 14:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:37.596 14:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.596 ************************************ 00:20:37.596 END TEST raid_state_function_test 00:20:37.596 ************************************ 00:20:37.856 14:03:26 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:20:37.856 14:03:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:37.856 14:03:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:37.856 14:03:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.856 ************************************ 00:20:37.856 START TEST raid_state_function_test_sb 00:20:37.856 ************************************ 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i++ )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=128971 00:20:37.856 Process raid pid: 128971 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 128971' 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 128971 /var/tmp/spdk-raid.sock 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 128971 ']' 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:37.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.856 14:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.856 [2024-07-25 14:03:26.757210] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:20:37.856 [2024-07-25 14:03:26.757413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.113 [2024-07-25 14:03:26.921747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.371 [2024-07-25 14:03:27.163226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.371 [2024-07-25 14:03:27.366392] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.937 14:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.937 14:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:38.937 14:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:39.196 [2024-07-25 14:03:28.010687] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:39.196 [2024-07-25 14:03:28.010792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:39.196 [2024-07-25 14:03:28.010809] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:39.196 [2024-07-25 14:03:28.010840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:39.196 [2024-07-25 14:03:28.010851] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:39.196 [2024-07-25 14:03:28.010870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.196 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.454 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.454 "name": "Existed_Raid", 00:20:39.454 "uuid": 
"0235fcf3-67eb-4dd4-a8e7-c7c79971ed40", 00:20:39.454 "strip_size_kb": 64, 00:20:39.454 "state": "configuring", 00:20:39.454 "raid_level": "concat", 00:20:39.454 "superblock": true, 00:20:39.454 "num_base_bdevs": 3, 00:20:39.454 "num_base_bdevs_discovered": 0, 00:20:39.454 "num_base_bdevs_operational": 3, 00:20:39.454 "base_bdevs_list": [ 00:20:39.454 { 00:20:39.454 "name": "BaseBdev1", 00:20:39.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.454 "is_configured": false, 00:20:39.454 "data_offset": 0, 00:20:39.454 "data_size": 0 00:20:39.454 }, 00:20:39.454 { 00:20:39.454 "name": "BaseBdev2", 00:20:39.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.454 "is_configured": false, 00:20:39.454 "data_offset": 0, 00:20:39.454 "data_size": 0 00:20:39.454 }, 00:20:39.454 { 00:20:39.454 "name": "BaseBdev3", 00:20:39.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.454 "is_configured": false, 00:20:39.454 "data_offset": 0, 00:20:39.454 "data_size": 0 00:20:39.454 } 00:20:39.454 ] 00:20:39.454 }' 00:20:39.455 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.455 14:03:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.081 14:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:40.340 [2024-07-25 14:03:29.246776] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:40.340 [2024-07-25 14:03:29.246847] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:20:40.340 14:03:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:40.598 [2024-07-25 14:03:29.486836] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:40.598 [2024-07-25 14:03:29.486953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:40.598 [2024-07-25 14:03:29.486968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:40.598 [2024-07-25 14:03:29.487002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:40.598 [2024-07-25 14:03:29.487011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:40.598 [2024-07-25 14:03:29.487037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:40.598 14:03:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:40.856 [2024-07-25 14:03:29.762885] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.856 BaseBdev1 00:20:40.856 14:03:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:40.856 14:03:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:40.856 14:03:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:40.856 14:03:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:20:40.856 14:03:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:40.856 14:03:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:40.856 14:03:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:41.114 14:03:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:41.372 [ 00:20:41.372 { 00:20:41.372 "name": "BaseBdev1", 00:20:41.372 "aliases": [ 00:20:41.372 "e5609e23-702e-4797-b99f-57204b3b4610" 00:20:41.372 ], 00:20:41.372 "product_name": "Malloc disk", 00:20:41.372 "block_size": 512, 00:20:41.372 "num_blocks": 65536, 00:20:41.372 "uuid": "e5609e23-702e-4797-b99f-57204b3b4610", 00:20:41.372 "assigned_rate_limits": { 00:20:41.372 "rw_ios_per_sec": 0, 00:20:41.372 "rw_mbytes_per_sec": 0, 00:20:41.372 "r_mbytes_per_sec": 0, 00:20:41.372 "w_mbytes_per_sec": 0 00:20:41.372 }, 00:20:41.372 "claimed": true, 00:20:41.372 "claim_type": "exclusive_write", 00:20:41.372 "zoned": false, 00:20:41.372 "supported_io_types": { 00:20:41.372 "read": true, 00:20:41.372 "write": true, 00:20:41.372 "unmap": true, 00:20:41.372 "flush": true, 00:20:41.373 "reset": true, 00:20:41.373 "nvme_admin": false, 00:20:41.373 "nvme_io": false, 00:20:41.373 "nvme_io_md": false, 00:20:41.373 "write_zeroes": true, 00:20:41.373 "zcopy": true, 00:20:41.373 "get_zone_info": false, 00:20:41.373 "zone_management": false, 00:20:41.373 "zone_append": false, 00:20:41.373 "compare": false, 00:20:41.373 "compare_and_write": false, 00:20:41.373 "abort": true, 00:20:41.373 "seek_hole": false, 00:20:41.373 "seek_data": false, 00:20:41.373 "copy": true, 00:20:41.373 "nvme_iov_md": false 00:20:41.373 }, 00:20:41.373 "memory_domains": [ 00:20:41.373 { 00:20:41.373 "dma_device_id": "system", 00:20:41.373 "dma_device_type": 1 00:20:41.373 }, 00:20:41.373 { 00:20:41.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.373 "dma_device_type": 2 00:20:41.373 } 00:20:41.373 ], 00:20:41.373 "driver_specific": {} 00:20:41.373 } 00:20:41.373 ] 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.373 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.630 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.630 "name": "Existed_Raid", 00:20:41.630 "uuid": "be0d2d02-61c2-49e1-8002-f4670ea9d9e4", 00:20:41.630 "strip_size_kb": 64, 00:20:41.630 "state": "configuring", 00:20:41.630 "raid_level": "concat", 00:20:41.630 "superblock": true, 00:20:41.630 "num_base_bdevs": 3, 00:20:41.630 "num_base_bdevs_discovered": 1, 00:20:41.630 "num_base_bdevs_operational": 3, 00:20:41.630 "base_bdevs_list": [ 00:20:41.630 { 00:20:41.630 "name": "BaseBdev1", 00:20:41.630 "uuid": "e5609e23-702e-4797-b99f-57204b3b4610", 00:20:41.630 "is_configured": true, 00:20:41.630 "data_offset": 2048, 00:20:41.630 "data_size": 63488 00:20:41.630 }, 00:20:41.630 { 00:20:41.630 "name": "BaseBdev2", 00:20:41.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.630 "is_configured": false, 00:20:41.630 "data_offset": 0, 00:20:41.630 "data_size": 0 00:20:41.630 }, 00:20:41.630 { 00:20:41.630 "name": "BaseBdev3", 00:20:41.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.630 "is_configured": false, 00:20:41.630 "data_offset": 0, 00:20:41.630 "data_size": 0 00:20:41.630 } 00:20:41.630 ] 00:20:41.630 }' 00:20:41.630 14:03:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.630 14:03:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.216 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:42.475 [2024-07-25 14:03:31.423296] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:42.475 [2024-07-25 14:03:31.423383] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:20:42.475 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:42.732 [2024-07-25 14:03:31.667386] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.732 [2024-07-25 14:03:31.669597] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:42.732 [2024-07-25 14:03:31.669691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:42.732 [2024-07-25 14:03:31.669705] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:42.732 [2024-07-25 14:03:31.669755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:42.732 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:42.732 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:42.732 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:42.732 14:03:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.733 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.991 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.991 "name": "Existed_Raid", 00:20:42.991 "uuid": "c2aca837-fc8a-45bd-b696-977a636114f3", 00:20:42.991 "strip_size_kb": 64, 00:20:42.991 "state": "configuring", 00:20:42.991 "raid_level": "concat", 00:20:42.991 "superblock": true, 00:20:42.991 "num_base_bdevs": 3, 00:20:42.991 "num_base_bdevs_discovered": 1, 00:20:42.991 "num_base_bdevs_operational": 3, 00:20:42.991 "base_bdevs_list": [ 00:20:42.991 { 00:20:42.991 "name": "BaseBdev1", 00:20:42.991 "uuid": "e5609e23-702e-4797-b99f-57204b3b4610", 00:20:42.991 "is_configured": true, 00:20:42.991 "data_offset": 2048, 00:20:42.991 "data_size": 63488 00:20:42.991 }, 00:20:42.991 { 00:20:42.991 "name": "BaseBdev2", 00:20:42.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.991 "is_configured": false, 00:20:42.991 "data_offset": 0, 00:20:42.991 "data_size": 0 00:20:42.991 }, 00:20:42.991 { 00:20:42.991 "name": "BaseBdev3", 00:20:42.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.991 "is_configured": false, 00:20:42.991 "data_offset": 0, 00:20:42.991 "data_size": 0 00:20:42.991 } 00:20:42.991 ] 00:20:42.991 }' 00:20:42.991 14:03:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.991 14:03:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.558 14:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:44.157 [2024-07-25 14:03:32.876215] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.157 BaseBdev2 00:20:44.157 14:03:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:44.157 14:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:44.157 14:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:44.157 14:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- 
# local i 00:20:44.157 14:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:44.157 14:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:44.157 14:03:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.157 14:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:44.723 [ 00:20:44.723 { 00:20:44.723 "name": "BaseBdev2", 00:20:44.723 "aliases": [ 00:20:44.723 "14e701ec-7d5b-449b-a872-8ff96a21eef3" 00:20:44.723 ], 00:20:44.723 "product_name": "Malloc disk", 00:20:44.723 "block_size": 512, 00:20:44.723 "num_blocks": 65536, 00:20:44.723 "uuid": "14e701ec-7d5b-449b-a872-8ff96a21eef3", 00:20:44.723 "assigned_rate_limits": { 00:20:44.723 "rw_ios_per_sec": 0, 00:20:44.723 "rw_mbytes_per_sec": 0, 00:20:44.723 "r_mbytes_per_sec": 0, 00:20:44.723 "w_mbytes_per_sec": 0 00:20:44.723 }, 00:20:44.723 "claimed": true, 00:20:44.723 "claim_type": "exclusive_write", 00:20:44.723 "zoned": false, 00:20:44.723 "supported_io_types": { 00:20:44.723 "read": true, 00:20:44.723 "write": true, 00:20:44.723 "unmap": true, 00:20:44.723 "flush": true, 00:20:44.723 "reset": true, 00:20:44.723 "nvme_admin": false, 00:20:44.723 "nvme_io": false, 00:20:44.723 "nvme_io_md": false, 00:20:44.723 "write_zeroes": true, 00:20:44.723 "zcopy": true, 00:20:44.723 "get_zone_info": false, 00:20:44.723 "zone_management": false, 00:20:44.723 "zone_append": false, 00:20:44.723 "compare": false, 00:20:44.723 "compare_and_write": false, 00:20:44.723 "abort": true, 00:20:44.723 "seek_hole": false, 00:20:44.723 "seek_data": false, 00:20:44.723 "copy": true, 00:20:44.723 "nvme_iov_md": false 00:20:44.723 }, 00:20:44.723 "memory_domains": [ 00:20:44.723 { 00:20:44.723 "dma_device_id": "system", 00:20:44.723 "dma_device_type": 1 00:20:44.723 }, 00:20:44.723 { 00:20:44.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.723 "dma_device_type": 2 00:20:44.723 } 00:20:44.723 ], 00:20:44.723 "driver_specific": {} 00:20:44.723 } 00:20:44.723 ] 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.723 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.981 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.981 "name": "Existed_Raid", 00:20:44.981 "uuid": "c2aca837-fc8a-45bd-b696-977a636114f3", 00:20:44.981 "strip_size_kb": 64, 00:20:44.981 "state": "configuring", 00:20:44.981 "raid_level": "concat", 00:20:44.981 "superblock": true, 00:20:44.981 "num_base_bdevs": 3, 00:20:44.981 "num_base_bdevs_discovered": 2, 00:20:44.981 "num_base_bdevs_operational": 3, 00:20:44.981 "base_bdevs_list": [ 00:20:44.981 { 00:20:44.981 "name": "BaseBdev1", 00:20:44.981 "uuid": "e5609e23-702e-4797-b99f-57204b3b4610", 00:20:44.981 "is_configured": true, 00:20:44.981 "data_offset": 2048, 00:20:44.981 "data_size": 63488 00:20:44.981 }, 00:20:44.981 { 00:20:44.981 "name": "BaseBdev2", 00:20:44.981 "uuid": "14e701ec-7d5b-449b-a872-8ff96a21eef3", 00:20:44.981 "is_configured": true, 00:20:44.981 "data_offset": 2048, 00:20:44.981 "data_size": 63488 00:20:44.981 }, 00:20:44.981 { 00:20:44.981 "name": "BaseBdev3", 00:20:44.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.981 "is_configured": false, 00:20:44.981 "data_offset": 0, 00:20:44.981 "data_size": 0 00:20:44.981 } 00:20:44.981 ] 00:20:44.981 }' 00:20:44.981 14:03:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.981 14:03:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.547 14:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:45.805 [2024-07-25 14:03:34.815988] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.805 [2024-07-25 14:03:34.816274] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:20:45.805 [2024-07-25 14:03:34.816291] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:45.805 [2024-07-25 14:03:34.816423] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:45.805 [2024-07-25 14:03:34.816803] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:20:45.805 [2024-07-25 14:03:34.816830] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:20:45.805 [2024-07-25 14:03:34.816991] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.805 BaseBdev3 00:20:45.805 14:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:45.805 14:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:45.805 14:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:45.805 14:03:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@901 -- # local i 00:20:45.805 14:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:45.805 14:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:45.805 14:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:46.372 [ 00:20:46.372 { 00:20:46.372 "name": "BaseBdev3", 00:20:46.372 "aliases": [ 00:20:46.372 "87930c5d-ef33-42e2-836b-f16a9d8a9478" 00:20:46.372 ], 00:20:46.372 "product_name": "Malloc disk", 00:20:46.372 "block_size": 512, 00:20:46.372 "num_blocks": 65536, 00:20:46.372 "uuid": "87930c5d-ef33-42e2-836b-f16a9d8a9478", 00:20:46.372 "assigned_rate_limits": { 00:20:46.372 "rw_ios_per_sec": 0, 00:20:46.372 "rw_mbytes_per_sec": 0, 00:20:46.372 "r_mbytes_per_sec": 0, 00:20:46.372 "w_mbytes_per_sec": 0 00:20:46.372 }, 00:20:46.372 "claimed": true, 00:20:46.372 "claim_type": "exclusive_write", 00:20:46.372 "zoned": false, 00:20:46.372 "supported_io_types": { 00:20:46.372 "read": true, 00:20:46.372 "write": true, 00:20:46.372 "unmap": true, 00:20:46.372 "flush": true, 00:20:46.372 "reset": true, 00:20:46.372 "nvme_admin": false, 00:20:46.372 "nvme_io": false, 00:20:46.372 "nvme_io_md": false, 00:20:46.372 "write_zeroes": true, 00:20:46.372 "zcopy": true, 00:20:46.372 "get_zone_info": false, 00:20:46.372 "zone_management": false, 00:20:46.372 "zone_append": false, 00:20:46.372 "compare": false, 00:20:46.372 "compare_and_write": false, 00:20:46.372 "abort": true, 00:20:46.372 "seek_hole": false, 00:20:46.372 "seek_data": false, 00:20:46.372 "copy": true, 00:20:46.372 "nvme_iov_md": false 00:20:46.372 }, 00:20:46.372 "memory_domains": [ 00:20:46.372 { 00:20:46.372 "dma_device_id": "system", 00:20:46.372 "dma_device_type": 1 00:20:46.372 }, 00:20:46.372 { 00:20:46.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.372 "dma_device_type": 2 00:20:46.372 } 00:20:46.372 ], 00:20:46.372 "driver_specific": {} 00:20:46.372 } 00:20:46.372 ] 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.372 14:03:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.372 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.630 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.630 "name": "Existed_Raid", 00:20:46.630 "uuid": "c2aca837-fc8a-45bd-b696-977a636114f3", 00:20:46.630 "strip_size_kb": 64, 00:20:46.630 "state": "online", 00:20:46.630 "raid_level": "concat", 00:20:46.630 "superblock": true, 00:20:46.630 "num_base_bdevs": 3, 00:20:46.630 "num_base_bdevs_discovered": 3, 00:20:46.630 "num_base_bdevs_operational": 3, 00:20:46.630 "base_bdevs_list": [ 00:20:46.630 { 00:20:46.630 "name": "BaseBdev1", 00:20:46.630 "uuid": "e5609e23-702e-4797-b99f-57204b3b4610", 00:20:46.630 "is_configured": true, 00:20:46.630 "data_offset": 2048, 00:20:46.630 "data_size": 63488 00:20:46.630 }, 00:20:46.630 { 00:20:46.630 "name": "BaseBdev2", 00:20:46.630 "uuid": "14e701ec-7d5b-449b-a872-8ff96a21eef3", 00:20:46.630 "is_configured": true, 00:20:46.630 "data_offset": 2048, 00:20:46.630 "data_size": 63488 00:20:46.630 }, 00:20:46.630 { 00:20:46.630 "name": "BaseBdev3", 00:20:46.630 "uuid": "87930c5d-ef33-42e2-836b-f16a9d8a9478", 00:20:46.630 "is_configured": true, 00:20:46.630 "data_offset": 2048, 00:20:46.630 "data_size": 63488 00:20:46.630 } 00:20:46.630 ] 00:20:46.630 }' 00:20:46.630 14:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.630 14:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:47.567 [2024-07-25 14:03:36.560748] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.567 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:47.567 "name": "Existed_Raid", 00:20:47.567 "aliases": [ 00:20:47.567 "c2aca837-fc8a-45bd-b696-977a636114f3" 00:20:47.567 ], 00:20:47.567 "product_name": "Raid Volume", 00:20:47.567 "block_size": 512, 00:20:47.567 "num_blocks": 190464, 00:20:47.567 "uuid": 
"c2aca837-fc8a-45bd-b696-977a636114f3", 00:20:47.567 "assigned_rate_limits": { 00:20:47.567 "rw_ios_per_sec": 0, 00:20:47.567 "rw_mbytes_per_sec": 0, 00:20:47.567 "r_mbytes_per_sec": 0, 00:20:47.567 "w_mbytes_per_sec": 0 00:20:47.567 }, 00:20:47.567 "claimed": false, 00:20:47.567 "zoned": false, 00:20:47.567 "supported_io_types": { 00:20:47.567 "read": true, 00:20:47.567 "write": true, 00:20:47.567 "unmap": true, 00:20:47.567 "flush": true, 00:20:47.567 "reset": true, 00:20:47.567 "nvme_admin": false, 00:20:47.567 "nvme_io": false, 00:20:47.567 "nvme_io_md": false, 00:20:47.567 "write_zeroes": true, 00:20:47.567 "zcopy": false, 00:20:47.567 "get_zone_info": false, 00:20:47.567 "zone_management": false, 00:20:47.567 "zone_append": false, 00:20:47.567 "compare": false, 00:20:47.567 "compare_and_write": false, 00:20:47.567 "abort": false, 00:20:47.567 "seek_hole": false, 00:20:47.567 "seek_data": false, 00:20:47.567 "copy": false, 00:20:47.567 "nvme_iov_md": false 00:20:47.567 }, 00:20:47.567 "memory_domains": [ 00:20:47.567 { 00:20:47.567 "dma_device_id": "system", 00:20:47.567 "dma_device_type": 1 00:20:47.567 }, 00:20:47.567 { 00:20:47.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.567 "dma_device_type": 2 00:20:47.567 }, 00:20:47.567 { 00:20:47.567 "dma_device_id": "system", 00:20:47.568 "dma_device_type": 1 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.568 "dma_device_type": 2 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "dma_device_id": "system", 00:20:47.568 "dma_device_type": 1 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.568 "dma_device_type": 2 00:20:47.568 } 00:20:47.568 ], 00:20:47.568 "driver_specific": { 00:20:47.568 "raid": { 00:20:47.568 "uuid": "c2aca837-fc8a-45bd-b696-977a636114f3", 00:20:47.568 "strip_size_kb": 64, 00:20:47.568 "state": "online", 00:20:47.568 "raid_level": "concat", 00:20:47.568 "superblock": true, 00:20:47.568 "num_base_bdevs": 3, 00:20:47.568 "num_base_bdevs_discovered": 3, 00:20:47.568 "num_base_bdevs_operational": 3, 00:20:47.568 "base_bdevs_list": [ 00:20:47.568 { 00:20:47.568 "name": "BaseBdev1", 00:20:47.568 "uuid": "e5609e23-702e-4797-b99f-57204b3b4610", 00:20:47.568 "is_configured": true, 00:20:47.568 "data_offset": 2048, 00:20:47.568 "data_size": 63488 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "name": "BaseBdev2", 00:20:47.568 "uuid": "14e701ec-7d5b-449b-a872-8ff96a21eef3", 00:20:47.568 "is_configured": true, 00:20:47.568 "data_offset": 2048, 00:20:47.568 "data_size": 63488 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "name": "BaseBdev3", 00:20:47.568 "uuid": "87930c5d-ef33-42e2-836b-f16a9d8a9478", 00:20:47.568 "is_configured": true, 00:20:47.568 "data_offset": 2048, 00:20:47.568 "data_size": 63488 00:20:47.568 } 00:20:47.568 ] 00:20:47.568 } 00:20:47.568 } 00:20:47.568 }' 00:20:47.568 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:47.826 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:47.826 BaseBdev2 00:20:47.826 BaseBdev3' 00:20:47.826 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:47.827 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:47.827 14:03:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:48.085 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.085 "name": "BaseBdev1", 00:20:48.085 "aliases": [ 00:20:48.085 "e5609e23-702e-4797-b99f-57204b3b4610" 00:20:48.085 ], 00:20:48.085 "product_name": "Malloc disk", 00:20:48.085 "block_size": 512, 00:20:48.085 "num_blocks": 65536, 00:20:48.085 "uuid": "e5609e23-702e-4797-b99f-57204b3b4610", 00:20:48.085 "assigned_rate_limits": { 00:20:48.085 "rw_ios_per_sec": 0, 00:20:48.085 "rw_mbytes_per_sec": 0, 00:20:48.085 "r_mbytes_per_sec": 0, 00:20:48.085 "w_mbytes_per_sec": 0 00:20:48.085 }, 00:20:48.085 "claimed": true, 00:20:48.085 "claim_type": "exclusive_write", 00:20:48.085 "zoned": false, 00:20:48.085 "supported_io_types": { 00:20:48.085 "read": true, 00:20:48.085 "write": true, 00:20:48.085 "unmap": true, 00:20:48.085 "flush": true, 00:20:48.085 "reset": true, 00:20:48.085 "nvme_admin": false, 00:20:48.085 "nvme_io": false, 00:20:48.085 "nvme_io_md": false, 00:20:48.085 "write_zeroes": true, 00:20:48.085 "zcopy": true, 00:20:48.085 "get_zone_info": false, 00:20:48.085 "zone_management": false, 00:20:48.085 "zone_append": false, 00:20:48.085 "compare": false, 00:20:48.085 "compare_and_write": false, 00:20:48.085 "abort": true, 00:20:48.085 "seek_hole": false, 00:20:48.085 "seek_data": false, 00:20:48.085 "copy": true, 00:20:48.085 "nvme_iov_md": false 00:20:48.085 }, 00:20:48.085 "memory_domains": [ 00:20:48.085 { 00:20:48.085 "dma_device_id": "system", 00:20:48.085 "dma_device_type": 1 00:20:48.085 }, 00:20:48.085 { 00:20:48.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.085 "dma_device_type": 2 00:20:48.085 } 00:20:48.085 ], 00:20:48.085 "driver_specific": {} 00:20:48.085 }' 00:20:48.085 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.085 14:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.085 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.085 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.085 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.085 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.085 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:48.343 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:48.601 14:03:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.601 "name": "BaseBdev2", 00:20:48.601 "aliases": [ 00:20:48.601 "14e701ec-7d5b-449b-a872-8ff96a21eef3" 00:20:48.601 ], 00:20:48.601 "product_name": "Malloc disk", 00:20:48.601 "block_size": 512, 00:20:48.601 "num_blocks": 65536, 00:20:48.601 "uuid": "14e701ec-7d5b-449b-a872-8ff96a21eef3", 00:20:48.601 "assigned_rate_limits": { 00:20:48.601 "rw_ios_per_sec": 0, 00:20:48.601 "rw_mbytes_per_sec": 0, 00:20:48.601 "r_mbytes_per_sec": 0, 00:20:48.601 "w_mbytes_per_sec": 0 00:20:48.601 }, 00:20:48.601 "claimed": true, 00:20:48.601 "claim_type": "exclusive_write", 00:20:48.601 "zoned": false, 00:20:48.601 "supported_io_types": { 00:20:48.601 "read": true, 00:20:48.601 "write": true, 00:20:48.601 "unmap": true, 00:20:48.601 "flush": true, 00:20:48.601 "reset": true, 00:20:48.601 "nvme_admin": false, 00:20:48.601 "nvme_io": false, 00:20:48.601 "nvme_io_md": false, 00:20:48.601 "write_zeroes": true, 00:20:48.601 "zcopy": true, 00:20:48.601 "get_zone_info": false, 00:20:48.601 "zone_management": false, 00:20:48.601 "zone_append": false, 00:20:48.601 "compare": false, 00:20:48.601 "compare_and_write": false, 00:20:48.601 "abort": true, 00:20:48.601 "seek_hole": false, 00:20:48.601 "seek_data": false, 00:20:48.601 "copy": true, 00:20:48.601 "nvme_iov_md": false 00:20:48.601 }, 00:20:48.601 "memory_domains": [ 00:20:48.601 { 00:20:48.601 "dma_device_id": "system", 00:20:48.601 "dma_device_type": 1 00:20:48.601 }, 00:20:48.601 { 00:20:48.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.601 "dma_device_type": 2 00:20:48.601 } 00:20:48.601 ], 00:20:48.601 "driver_specific": {} 00:20:48.601 }' 00:20:48.601 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.601 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.860 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.118 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.118 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:49.118 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:49.118 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:49.118 14:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:49.377 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:49.377 "name": "BaseBdev3", 00:20:49.377 "aliases": [ 00:20:49.377 
"87930c5d-ef33-42e2-836b-f16a9d8a9478" 00:20:49.377 ], 00:20:49.377 "product_name": "Malloc disk", 00:20:49.377 "block_size": 512, 00:20:49.377 "num_blocks": 65536, 00:20:49.377 "uuid": "87930c5d-ef33-42e2-836b-f16a9d8a9478", 00:20:49.377 "assigned_rate_limits": { 00:20:49.377 "rw_ios_per_sec": 0, 00:20:49.377 "rw_mbytes_per_sec": 0, 00:20:49.377 "r_mbytes_per_sec": 0, 00:20:49.377 "w_mbytes_per_sec": 0 00:20:49.377 }, 00:20:49.377 "claimed": true, 00:20:49.377 "claim_type": "exclusive_write", 00:20:49.377 "zoned": false, 00:20:49.377 "supported_io_types": { 00:20:49.377 "read": true, 00:20:49.377 "write": true, 00:20:49.377 "unmap": true, 00:20:49.377 "flush": true, 00:20:49.377 "reset": true, 00:20:49.377 "nvme_admin": false, 00:20:49.377 "nvme_io": false, 00:20:49.377 "nvme_io_md": false, 00:20:49.377 "write_zeroes": true, 00:20:49.377 "zcopy": true, 00:20:49.377 "get_zone_info": false, 00:20:49.377 "zone_management": false, 00:20:49.377 "zone_append": false, 00:20:49.377 "compare": false, 00:20:49.377 "compare_and_write": false, 00:20:49.377 "abort": true, 00:20:49.377 "seek_hole": false, 00:20:49.377 "seek_data": false, 00:20:49.377 "copy": true, 00:20:49.377 "nvme_iov_md": false 00:20:49.377 }, 00:20:49.377 "memory_domains": [ 00:20:49.377 { 00:20:49.377 "dma_device_id": "system", 00:20:49.377 "dma_device_type": 1 00:20:49.377 }, 00:20:49.377 { 00:20:49.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.377 "dma_device_type": 2 00:20:49.377 } 00:20:49.377 ], 00:20:49.377 "driver_specific": {} 00:20:49.377 }' 00:20:49.377 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:49.377 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:49.377 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:49.377 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.377 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.635 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:49.635 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:49.635 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:49.635 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:49.635 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.635 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.892 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:49.892 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:49.892 [2024-07-25 14:03:38.900986] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.892 [2024-07-25 14:03:38.901305] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:49.892 [2024-07-25 14:03:38.901482] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.150 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:50.150 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy concat 00:20:50.150 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:50.150 14:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:50.150 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:50.150 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.151 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.409 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.409 "name": "Existed_Raid", 00:20:50.409 "uuid": "c2aca837-fc8a-45bd-b696-977a636114f3", 00:20:50.409 "strip_size_kb": 64, 00:20:50.409 "state": "offline", 00:20:50.409 "raid_level": "concat", 00:20:50.409 "superblock": true, 00:20:50.409 "num_base_bdevs": 3, 00:20:50.409 "num_base_bdevs_discovered": 2, 00:20:50.409 "num_base_bdevs_operational": 2, 00:20:50.409 "base_bdevs_list": [ 00:20:50.409 { 00:20:50.409 "name": null, 00:20:50.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.409 "is_configured": false, 00:20:50.409 "data_offset": 2048, 00:20:50.409 "data_size": 63488 00:20:50.409 }, 00:20:50.409 { 00:20:50.409 "name": "BaseBdev2", 00:20:50.409 "uuid": "14e701ec-7d5b-449b-a872-8ff96a21eef3", 00:20:50.409 "is_configured": true, 00:20:50.409 "data_offset": 2048, 00:20:50.409 "data_size": 63488 00:20:50.409 }, 00:20:50.409 { 00:20:50.409 "name": "BaseBdev3", 00:20:50.409 "uuid": "87930c5d-ef33-42e2-836b-f16a9d8a9478", 00:20:50.409 "is_configured": true, 00:20:50.409 "data_offset": 2048, 00:20:50.409 "data_size": 63488 00:20:50.409 } 00:20:50.409 ] 00:20:50.409 }' 00:20:50.409 14:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.409 14:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.044 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:51.044 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:51.044 14:03:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.044 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:51.302 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:51.302 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:51.302 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:51.561 [2024-07-25 14:03:40.554945] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:51.819 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:51.819 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:51.819 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.819 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:52.079 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:52.079 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:52.079 14:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:52.336 [2024-07-25 14:03:41.216020] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:52.336 [2024-07-25 14:03:41.216410] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:20:52.336 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:52.336 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:52.336 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.337 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:52.594 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:52.594 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:52.594 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:52.594 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:52.594 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:52.594 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:52.853 BaseBdev2 00:20:53.111 14:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:53.111 14:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:53.111 14:03:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:53.111 14:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:53.111 14:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:53.111 14:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:53.111 14:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:53.369 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:53.369 [ 00:20:53.369 { 00:20:53.369 "name": "BaseBdev2", 00:20:53.369 "aliases": [ 00:20:53.369 "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1" 00:20:53.369 ], 00:20:53.369 "product_name": "Malloc disk", 00:20:53.369 "block_size": 512, 00:20:53.369 "num_blocks": 65536, 00:20:53.369 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:20:53.369 "assigned_rate_limits": { 00:20:53.369 "rw_ios_per_sec": 0, 00:20:53.369 "rw_mbytes_per_sec": 0, 00:20:53.369 "r_mbytes_per_sec": 0, 00:20:53.369 "w_mbytes_per_sec": 0 00:20:53.369 }, 00:20:53.369 "claimed": false, 00:20:53.369 "zoned": false, 00:20:53.369 "supported_io_types": { 00:20:53.369 "read": true, 00:20:53.369 "write": true, 00:20:53.369 "unmap": true, 00:20:53.369 "flush": true, 00:20:53.369 "reset": true, 00:20:53.369 "nvme_admin": false, 00:20:53.369 "nvme_io": false, 00:20:53.369 "nvme_io_md": false, 00:20:53.369 "write_zeroes": true, 00:20:53.369 "zcopy": true, 00:20:53.369 "get_zone_info": false, 00:20:53.369 "zone_management": false, 00:20:53.369 "zone_append": false, 00:20:53.370 "compare": false, 00:20:53.370 "compare_and_write": false, 00:20:53.370 "abort": true, 00:20:53.370 "seek_hole": false, 00:20:53.370 "seek_data": false, 00:20:53.370 "copy": true, 00:20:53.370 "nvme_iov_md": false 00:20:53.370 }, 00:20:53.370 "memory_domains": [ 00:20:53.370 { 00:20:53.370 "dma_device_id": "system", 00:20:53.370 "dma_device_type": 1 00:20:53.370 }, 00:20:53.370 { 00:20:53.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.370 "dma_device_type": 2 00:20:53.370 } 00:20:53.370 ], 00:20:53.370 "driver_specific": {} 00:20:53.370 } 00:20:53.370 ] 00:20:53.627 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:53.627 14:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:53.627 14:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:53.628 14:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:53.885 BaseBdev3 00:20:53.885 14:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:53.885 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:53.885 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:53.885 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:53.885 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:53.885 14:03:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:53.885 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.143 14:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:54.401 [ 00:20:54.401 { 00:20:54.401 "name": "BaseBdev3", 00:20:54.401 "aliases": [ 00:20:54.401 "c9cb8a75-e751-4728-af45-5bb38cb61bd4" 00:20:54.401 ], 00:20:54.401 "product_name": "Malloc disk", 00:20:54.401 "block_size": 512, 00:20:54.401 "num_blocks": 65536, 00:20:54.401 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:20:54.401 "assigned_rate_limits": { 00:20:54.401 "rw_ios_per_sec": 0, 00:20:54.401 "rw_mbytes_per_sec": 0, 00:20:54.401 "r_mbytes_per_sec": 0, 00:20:54.401 "w_mbytes_per_sec": 0 00:20:54.401 }, 00:20:54.401 "claimed": false, 00:20:54.401 "zoned": false, 00:20:54.401 "supported_io_types": { 00:20:54.401 "read": true, 00:20:54.401 "write": true, 00:20:54.401 "unmap": true, 00:20:54.401 "flush": true, 00:20:54.401 "reset": true, 00:20:54.401 "nvme_admin": false, 00:20:54.401 "nvme_io": false, 00:20:54.401 "nvme_io_md": false, 00:20:54.401 "write_zeroes": true, 00:20:54.401 "zcopy": true, 00:20:54.401 "get_zone_info": false, 00:20:54.401 "zone_management": false, 00:20:54.401 "zone_append": false, 00:20:54.401 "compare": false, 00:20:54.401 "compare_and_write": false, 00:20:54.401 "abort": true, 00:20:54.401 "seek_hole": false, 00:20:54.401 "seek_data": false, 00:20:54.401 "copy": true, 00:20:54.401 "nvme_iov_md": false 00:20:54.401 }, 00:20:54.401 "memory_domains": [ 00:20:54.401 { 00:20:54.401 "dma_device_id": "system", 00:20:54.401 "dma_device_type": 1 00:20:54.401 }, 00:20:54.401 { 00:20:54.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.401 "dma_device_type": 2 00:20:54.401 } 00:20:54.401 ], 00:20:54.401 "driver_specific": {} 00:20:54.401 } 00:20:54.401 ] 00:20:54.401 14:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:54.401 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:54.401 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:54.401 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:54.660 [2024-07-25 14:03:43.544784] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:54.660 [2024-07-25 14:03:43.545112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:54.660 [2024-07-25 14:03:43.545287] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.660 [2024-07-25 14:03:43.547621] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.660 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.918 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.918 "name": "Existed_Raid", 00:20:54.918 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:20:54.919 "strip_size_kb": 64, 00:20:54.919 "state": "configuring", 00:20:54.919 "raid_level": "concat", 00:20:54.919 "superblock": true, 00:20:54.919 "num_base_bdevs": 3, 00:20:54.919 "num_base_bdevs_discovered": 2, 00:20:54.919 "num_base_bdevs_operational": 3, 00:20:54.919 "base_bdevs_list": [ 00:20:54.919 { 00:20:54.919 "name": "BaseBdev1", 00:20:54.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.919 "is_configured": false, 00:20:54.919 "data_offset": 0, 00:20:54.919 "data_size": 0 00:20:54.919 }, 00:20:54.919 { 00:20:54.919 "name": "BaseBdev2", 00:20:54.919 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:20:54.919 "is_configured": true, 00:20:54.919 "data_offset": 2048, 00:20:54.919 "data_size": 63488 00:20:54.919 }, 00:20:54.919 { 00:20:54.919 "name": "BaseBdev3", 00:20:54.919 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:20:54.919 "is_configured": true, 00:20:54.919 "data_offset": 2048, 00:20:54.919 "data_size": 63488 00:20:54.919 } 00:20:54.919 ] 00:20:54.919 }' 00:20:54.919 14:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.919 14:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:55.863 [2024-07-25 14:03:44.824924] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:55.863 14:03:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.863 14:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.122 14:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.122 "name": "Existed_Raid", 00:20:56.122 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:20:56.122 "strip_size_kb": 64, 00:20:56.122 "state": "configuring", 00:20:56.122 "raid_level": "concat", 00:20:56.122 "superblock": true, 00:20:56.122 "num_base_bdevs": 3, 00:20:56.122 "num_base_bdevs_discovered": 1, 00:20:56.122 "num_base_bdevs_operational": 3, 00:20:56.122 "base_bdevs_list": [ 00:20:56.122 { 00:20:56.122 "name": "BaseBdev1", 00:20:56.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.122 "is_configured": false, 00:20:56.122 "data_offset": 0, 00:20:56.122 "data_size": 0 00:20:56.122 }, 00:20:56.122 { 00:20:56.122 "name": null, 00:20:56.122 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:20:56.122 "is_configured": false, 00:20:56.122 "data_offset": 2048, 00:20:56.122 "data_size": 63488 00:20:56.122 }, 00:20:56.122 { 00:20:56.122 "name": "BaseBdev3", 00:20:56.122 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:20:56.122 "is_configured": true, 00:20:56.122 "data_offset": 2048, 00:20:56.122 "data_size": 63488 00:20:56.122 } 00:20:56.122 ] 00:20:56.122 }' 00:20:56.122 14:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.122 14:03:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.057 14:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.057 14:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:57.315 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:57.315 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:57.574 [2024-07-25 14:03:46.464703] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.574 BaseBdev1 00:20:57.574 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:57.574 14:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:57.574 14:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:57.574 14:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:57.574 14:03:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:57.574 14:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:57.574 14:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.832 14:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:58.091 [ 00:20:58.091 { 00:20:58.091 "name": "BaseBdev1", 00:20:58.091 "aliases": [ 00:20:58.091 "c3442eda-1926-467e-acbd-382ca6d43065" 00:20:58.091 ], 00:20:58.091 "product_name": "Malloc disk", 00:20:58.091 "block_size": 512, 00:20:58.091 "num_blocks": 65536, 00:20:58.091 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:20:58.091 "assigned_rate_limits": { 00:20:58.091 "rw_ios_per_sec": 0, 00:20:58.091 "rw_mbytes_per_sec": 0, 00:20:58.091 "r_mbytes_per_sec": 0, 00:20:58.091 "w_mbytes_per_sec": 0 00:20:58.091 }, 00:20:58.091 "claimed": true, 00:20:58.091 "claim_type": "exclusive_write", 00:20:58.091 "zoned": false, 00:20:58.091 "supported_io_types": { 00:20:58.091 "read": true, 00:20:58.091 "write": true, 00:20:58.091 "unmap": true, 00:20:58.091 "flush": true, 00:20:58.091 "reset": true, 00:20:58.091 "nvme_admin": false, 00:20:58.091 "nvme_io": false, 00:20:58.091 "nvme_io_md": false, 00:20:58.091 "write_zeroes": true, 00:20:58.091 "zcopy": true, 00:20:58.091 "get_zone_info": false, 00:20:58.091 "zone_management": false, 00:20:58.091 "zone_append": false, 00:20:58.091 "compare": false, 00:20:58.091 "compare_and_write": false, 00:20:58.091 "abort": true, 00:20:58.091 "seek_hole": false, 00:20:58.091 "seek_data": false, 00:20:58.091 "copy": true, 00:20:58.091 "nvme_iov_md": false 00:20:58.091 }, 00:20:58.091 "memory_domains": [ 00:20:58.091 { 00:20:58.091 "dma_device_id": "system", 00:20:58.091 "dma_device_type": 1 00:20:58.091 }, 00:20:58.091 { 00:20:58.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.091 "dma_device_type": 2 00:20:58.091 } 00:20:58.091 ], 00:20:58.091 "driver_specific": {} 00:20:58.091 } 00:20:58.091 ] 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.091 14:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.349 14:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:58.349 "name": "Existed_Raid", 00:20:58.349 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:20:58.349 "strip_size_kb": 64, 00:20:58.349 "state": "configuring", 00:20:58.349 "raid_level": "concat", 00:20:58.349 "superblock": true, 00:20:58.349 "num_base_bdevs": 3, 00:20:58.349 "num_base_bdevs_discovered": 2, 00:20:58.349 "num_base_bdevs_operational": 3, 00:20:58.349 "base_bdevs_list": [ 00:20:58.349 { 00:20:58.349 "name": "BaseBdev1", 00:20:58.349 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:20:58.349 "is_configured": true, 00:20:58.349 "data_offset": 2048, 00:20:58.349 "data_size": 63488 00:20:58.349 }, 00:20:58.349 { 00:20:58.349 "name": null, 00:20:58.349 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:20:58.349 "is_configured": false, 00:20:58.349 "data_offset": 2048, 00:20:58.349 "data_size": 63488 00:20:58.349 }, 00:20:58.349 { 00:20:58.349 "name": "BaseBdev3", 00:20:58.349 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:20:58.349 "is_configured": true, 00:20:58.349 "data_offset": 2048, 00:20:58.349 "data_size": 63488 00:20:58.349 } 00:20:58.349 ] 00:20:58.349 }' 00:20:58.349 14:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:58.349 14:03:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.915 14:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.915 14:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:59.481 [2024-07-25 14:03:48.490550] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:59.481 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:59.482 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.482 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.482 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:20:59.482 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.482 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.482 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.048 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.048 "name": "Existed_Raid", 00:21:00.048 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:21:00.048 "strip_size_kb": 64, 00:21:00.048 "state": "configuring", 00:21:00.048 "raid_level": "concat", 00:21:00.048 "superblock": true, 00:21:00.048 "num_base_bdevs": 3, 00:21:00.048 "num_base_bdevs_discovered": 1, 00:21:00.048 "num_base_bdevs_operational": 3, 00:21:00.048 "base_bdevs_list": [ 00:21:00.048 { 00:21:00.048 "name": "BaseBdev1", 00:21:00.048 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:00.048 "is_configured": true, 00:21:00.048 "data_offset": 2048, 00:21:00.048 "data_size": 63488 00:21:00.048 }, 00:21:00.048 { 00:21:00.049 "name": null, 00:21:00.049 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:21:00.049 "is_configured": false, 00:21:00.049 "data_offset": 2048, 00:21:00.049 "data_size": 63488 00:21:00.049 }, 00:21:00.049 { 00:21:00.049 "name": null, 00:21:00.049 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:21:00.049 "is_configured": false, 00:21:00.049 "data_offset": 2048, 00:21:00.049 "data_size": 63488 00:21:00.049 } 00:21:00.049 ] 00:21:00.049 }' 00:21:00.049 14:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.049 14:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.625 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.625 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:00.883 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:00.883 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:01.141 [2024-07-25 14:03:49.966851] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.141 14:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.399 14:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.399 "name": "Existed_Raid", 00:21:01.399 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:21:01.399 "strip_size_kb": 64, 00:21:01.399 "state": "configuring", 00:21:01.399 "raid_level": "concat", 00:21:01.399 "superblock": true, 00:21:01.399 "num_base_bdevs": 3, 00:21:01.399 "num_base_bdevs_discovered": 2, 00:21:01.399 "num_base_bdevs_operational": 3, 00:21:01.399 "base_bdevs_list": [ 00:21:01.399 { 00:21:01.399 "name": "BaseBdev1", 00:21:01.399 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:01.399 "is_configured": true, 00:21:01.399 "data_offset": 2048, 00:21:01.399 "data_size": 63488 00:21:01.399 }, 00:21:01.400 { 00:21:01.400 "name": null, 00:21:01.400 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:21:01.400 "is_configured": false, 00:21:01.400 "data_offset": 2048, 00:21:01.400 "data_size": 63488 00:21:01.400 }, 00:21:01.400 { 00:21:01.400 "name": "BaseBdev3", 00:21:01.400 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:21:01.400 "is_configured": true, 00:21:01.400 "data_offset": 2048, 00:21:01.400 "data_size": 63488 00:21:01.400 } 00:21:01.400 ] 00:21:01.400 }' 00:21:01.400 14:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.400 14:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.967 14:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.967 14:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:02.224 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:02.224 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:02.482 [2024-07-25 14:03:51.415198] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.483 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.741 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.741 "name": "Existed_Raid", 00:21:02.741 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:21:02.741 "strip_size_kb": 64, 00:21:02.741 "state": "configuring", 00:21:02.741 "raid_level": "concat", 00:21:02.741 "superblock": true, 00:21:02.741 "num_base_bdevs": 3, 00:21:02.741 "num_base_bdevs_discovered": 1, 00:21:02.741 "num_base_bdevs_operational": 3, 00:21:02.741 "base_bdevs_list": [ 00:21:02.741 { 00:21:02.741 "name": null, 00:21:02.741 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:02.741 "is_configured": false, 00:21:02.741 "data_offset": 2048, 00:21:02.741 "data_size": 63488 00:21:02.741 }, 00:21:02.741 { 00:21:02.741 "name": null, 00:21:02.741 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:21:02.741 "is_configured": false, 00:21:02.741 "data_offset": 2048, 00:21:02.741 "data_size": 63488 00:21:02.741 }, 00:21:02.741 { 00:21:02.741 "name": "BaseBdev3", 00:21:02.741 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:21:02.741 "is_configured": true, 00:21:02.741 "data_offset": 2048, 00:21:02.741 "data_size": 63488 00:21:02.741 } 00:21:02.741 ] 00:21:02.741 }' 00:21:02.741 14:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.741 14:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.675 14:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:03.675 14:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.933 14:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:03.933 14:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:04.191 [2024-07-25 14:03:53.019843] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:04.191 14:03:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.191 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.449 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.449 "name": "Existed_Raid", 00:21:04.449 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:21:04.449 "strip_size_kb": 64, 00:21:04.449 "state": "configuring", 00:21:04.449 "raid_level": "concat", 00:21:04.449 "superblock": true, 00:21:04.449 "num_base_bdevs": 3, 00:21:04.449 "num_base_bdevs_discovered": 2, 00:21:04.449 "num_base_bdevs_operational": 3, 00:21:04.449 "base_bdevs_list": [ 00:21:04.449 { 00:21:04.449 "name": null, 00:21:04.449 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:04.449 "is_configured": false, 00:21:04.449 "data_offset": 2048, 00:21:04.449 "data_size": 63488 00:21:04.449 }, 00:21:04.449 { 00:21:04.449 "name": "BaseBdev2", 00:21:04.449 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:21:04.449 "is_configured": true, 00:21:04.449 "data_offset": 2048, 00:21:04.449 "data_size": 63488 00:21:04.449 }, 00:21:04.449 { 00:21:04.449 "name": "BaseBdev3", 00:21:04.449 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:21:04.449 "is_configured": true, 00:21:04.449 "data_offset": 2048, 00:21:04.449 "data_size": 63488 00:21:04.449 } 00:21:04.449 ] 00:21:04.449 }' 00:21:04.449 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.449 14:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.013 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.013 14:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:05.270 14:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:05.270 14:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.270 14:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:05.528 14:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c3442eda-1926-467e-acbd-382ca6d43065 00:21:06.091 [2024-07-25 14:03:54.863183] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:06.091 [2024-07-25 14:03:54.863653] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:21:06.091 
[2024-07-25 14:03:54.863784] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:06.091 [2024-07-25 14:03:54.863951] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:06.091 [2024-07-25 14:03:54.864344] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:21:06.091 [2024-07-25 14:03:54.864472] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:21:06.091 [2024-07-25 14:03:54.864720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.091 NewBaseBdev 00:21:06.091 14:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:06.091 14:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:06.091 14:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:06.091 14:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:06.091 14:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:06.091 14:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:06.091 14:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:06.349 14:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:06.607 [ 00:21:06.607 { 00:21:06.607 "name": "NewBaseBdev", 00:21:06.607 "aliases": [ 00:21:06.607 "c3442eda-1926-467e-acbd-382ca6d43065" 00:21:06.607 ], 00:21:06.607 "product_name": "Malloc disk", 00:21:06.607 "block_size": 512, 00:21:06.607 "num_blocks": 65536, 00:21:06.607 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:06.607 "assigned_rate_limits": { 00:21:06.607 "rw_ios_per_sec": 0, 00:21:06.607 "rw_mbytes_per_sec": 0, 00:21:06.607 "r_mbytes_per_sec": 0, 00:21:06.607 "w_mbytes_per_sec": 0 00:21:06.607 }, 00:21:06.607 "claimed": true, 00:21:06.607 "claim_type": "exclusive_write", 00:21:06.607 "zoned": false, 00:21:06.607 "supported_io_types": { 00:21:06.607 "read": true, 00:21:06.607 "write": true, 00:21:06.607 "unmap": true, 00:21:06.607 "flush": true, 00:21:06.607 "reset": true, 00:21:06.607 "nvme_admin": false, 00:21:06.607 "nvme_io": false, 00:21:06.607 "nvme_io_md": false, 00:21:06.607 "write_zeroes": true, 00:21:06.607 "zcopy": true, 00:21:06.607 "get_zone_info": false, 00:21:06.607 "zone_management": false, 00:21:06.607 "zone_append": false, 00:21:06.607 "compare": false, 00:21:06.607 "compare_and_write": false, 00:21:06.607 "abort": true, 00:21:06.607 "seek_hole": false, 00:21:06.607 "seek_data": false, 00:21:06.607 "copy": true, 00:21:06.607 "nvme_iov_md": false 00:21:06.607 }, 00:21:06.607 "memory_domains": [ 00:21:06.607 { 00:21:06.607 "dma_device_id": "system", 00:21:06.607 "dma_device_type": 1 00:21:06.607 }, 00:21:06.607 { 00:21:06.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.607 "dma_device_type": 2 00:21:06.607 } 00:21:06.607 ], 00:21:06.607 "driver_specific": {} 00:21:06.607 } 00:21:06.607 ] 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:06.607 14:03:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.607 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.864 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.864 "name": "Existed_Raid", 00:21:06.865 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:21:06.865 "strip_size_kb": 64, 00:21:06.865 "state": "online", 00:21:06.865 "raid_level": "concat", 00:21:06.865 "superblock": true, 00:21:06.865 "num_base_bdevs": 3, 00:21:06.865 "num_base_bdevs_discovered": 3, 00:21:06.865 "num_base_bdevs_operational": 3, 00:21:06.865 "base_bdevs_list": [ 00:21:06.865 { 00:21:06.865 "name": "NewBaseBdev", 00:21:06.865 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:06.865 "is_configured": true, 00:21:06.865 "data_offset": 2048, 00:21:06.865 "data_size": 63488 00:21:06.865 }, 00:21:06.865 { 00:21:06.865 "name": "BaseBdev2", 00:21:06.865 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:21:06.865 "is_configured": true, 00:21:06.865 "data_offset": 2048, 00:21:06.865 "data_size": 63488 00:21:06.865 }, 00:21:06.865 { 00:21:06.865 "name": "BaseBdev3", 00:21:06.865 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:21:06.865 "is_configured": true, 00:21:06.865 "data_offset": 2048, 00:21:06.865 "data_size": 63488 00:21:06.865 } 00:21:06.865 ] 00:21:06.865 }' 00:21:06.865 14:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.865 14:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.434 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:07.435 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:07.435 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:07.435 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:07.435 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:07.435 14:03:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local name 00:21:07.435 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:07.435 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:07.693 [2024-07-25 14:03:56.675958] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:07.693 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:07.693 "name": "Existed_Raid", 00:21:07.693 "aliases": [ 00:21:07.693 "ff5c0731-79ec-443b-a887-41986a933152" 00:21:07.693 ], 00:21:07.693 "product_name": "Raid Volume", 00:21:07.693 "block_size": 512, 00:21:07.693 "num_blocks": 190464, 00:21:07.693 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:21:07.693 "assigned_rate_limits": { 00:21:07.693 "rw_ios_per_sec": 0, 00:21:07.693 "rw_mbytes_per_sec": 0, 00:21:07.693 "r_mbytes_per_sec": 0, 00:21:07.693 "w_mbytes_per_sec": 0 00:21:07.693 }, 00:21:07.693 "claimed": false, 00:21:07.693 "zoned": false, 00:21:07.693 "supported_io_types": { 00:21:07.693 "read": true, 00:21:07.693 "write": true, 00:21:07.693 "unmap": true, 00:21:07.693 "flush": true, 00:21:07.693 "reset": true, 00:21:07.693 "nvme_admin": false, 00:21:07.693 "nvme_io": false, 00:21:07.693 "nvme_io_md": false, 00:21:07.693 "write_zeroes": true, 00:21:07.693 "zcopy": false, 00:21:07.693 "get_zone_info": false, 00:21:07.693 "zone_management": false, 00:21:07.693 "zone_append": false, 00:21:07.693 "compare": false, 00:21:07.693 "compare_and_write": false, 00:21:07.693 "abort": false, 00:21:07.693 "seek_hole": false, 00:21:07.693 "seek_data": false, 00:21:07.693 "copy": false, 00:21:07.693 "nvme_iov_md": false 00:21:07.693 }, 00:21:07.693 "memory_domains": [ 00:21:07.693 { 00:21:07.693 "dma_device_id": "system", 00:21:07.693 "dma_device_type": 1 00:21:07.693 }, 00:21:07.693 { 00:21:07.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.693 "dma_device_type": 2 00:21:07.693 }, 00:21:07.693 { 00:21:07.693 "dma_device_id": "system", 00:21:07.693 "dma_device_type": 1 00:21:07.693 }, 00:21:07.693 { 00:21:07.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.693 "dma_device_type": 2 00:21:07.693 }, 00:21:07.693 { 00:21:07.693 "dma_device_id": "system", 00:21:07.693 "dma_device_type": 1 00:21:07.693 }, 00:21:07.693 { 00:21:07.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.693 "dma_device_type": 2 00:21:07.693 } 00:21:07.693 ], 00:21:07.693 "driver_specific": { 00:21:07.693 "raid": { 00:21:07.693 "uuid": "ff5c0731-79ec-443b-a887-41986a933152", 00:21:07.693 "strip_size_kb": 64, 00:21:07.693 "state": "online", 00:21:07.693 "raid_level": "concat", 00:21:07.693 "superblock": true, 00:21:07.693 "num_base_bdevs": 3, 00:21:07.693 "num_base_bdevs_discovered": 3, 00:21:07.693 "num_base_bdevs_operational": 3, 00:21:07.693 "base_bdevs_list": [ 00:21:07.693 { 00:21:07.693 "name": "NewBaseBdev", 00:21:07.693 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:07.693 "is_configured": true, 00:21:07.693 "data_offset": 2048, 00:21:07.693 "data_size": 63488 00:21:07.693 }, 00:21:07.693 { 00:21:07.693 "name": "BaseBdev2", 00:21:07.693 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:21:07.693 "is_configured": true, 00:21:07.693 "data_offset": 2048, 00:21:07.693 "data_size": 63488 00:21:07.693 }, 00:21:07.694 { 00:21:07.694 "name": "BaseBdev3", 00:21:07.694 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:21:07.694 "is_configured": 
true, 00:21:07.694 "data_offset": 2048, 00:21:07.694 "data_size": 63488 00:21:07.694 } 00:21:07.694 ] 00:21:07.694 } 00:21:07.694 } 00:21:07.694 }' 00:21:07.694 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:07.952 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:07.952 BaseBdev2 00:21:07.952 BaseBdev3' 00:21:07.952 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:07.952 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:07.952 14:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.210 "name": "NewBaseBdev", 00:21:08.210 "aliases": [ 00:21:08.210 "c3442eda-1926-467e-acbd-382ca6d43065" 00:21:08.210 ], 00:21:08.210 "product_name": "Malloc disk", 00:21:08.210 "block_size": 512, 00:21:08.210 "num_blocks": 65536, 00:21:08.210 "uuid": "c3442eda-1926-467e-acbd-382ca6d43065", 00:21:08.210 "assigned_rate_limits": { 00:21:08.210 "rw_ios_per_sec": 0, 00:21:08.210 "rw_mbytes_per_sec": 0, 00:21:08.210 "r_mbytes_per_sec": 0, 00:21:08.210 "w_mbytes_per_sec": 0 00:21:08.210 }, 00:21:08.210 "claimed": true, 00:21:08.210 "claim_type": "exclusive_write", 00:21:08.210 "zoned": false, 00:21:08.210 "supported_io_types": { 00:21:08.210 "read": true, 00:21:08.210 "write": true, 00:21:08.210 "unmap": true, 00:21:08.210 "flush": true, 00:21:08.210 "reset": true, 00:21:08.210 "nvme_admin": false, 00:21:08.210 "nvme_io": false, 00:21:08.210 "nvme_io_md": false, 00:21:08.210 "write_zeroes": true, 00:21:08.210 "zcopy": true, 00:21:08.210 "get_zone_info": false, 00:21:08.210 "zone_management": false, 00:21:08.210 "zone_append": false, 00:21:08.210 "compare": false, 00:21:08.210 "compare_and_write": false, 00:21:08.210 "abort": true, 00:21:08.210 "seek_hole": false, 00:21:08.210 "seek_data": false, 00:21:08.210 "copy": true, 00:21:08.210 "nvme_iov_md": false 00:21:08.210 }, 00:21:08.210 "memory_domains": [ 00:21:08.210 { 00:21:08.210 "dma_device_id": "system", 00:21:08.210 "dma_device_type": 1 00:21:08.210 }, 00:21:08.210 { 00:21:08.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.210 "dma_device_type": 2 00:21:08.210 } 00:21:08.210 ], 00:21:08.210 "driver_specific": {} 00:21:08.210 }' 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:08.210 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.468 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:08.726 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.726 "name": "BaseBdev2", 00:21:08.726 "aliases": [ 00:21:08.726 "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1" 00:21:08.726 ], 00:21:08.726 "product_name": "Malloc disk", 00:21:08.726 "block_size": 512, 00:21:08.726 "num_blocks": 65536, 00:21:08.726 "uuid": "2d14b00d-eded-4b4a-8c7f-ae1a166a9fc1", 00:21:08.726 "assigned_rate_limits": { 00:21:08.726 "rw_ios_per_sec": 0, 00:21:08.726 "rw_mbytes_per_sec": 0, 00:21:08.726 "r_mbytes_per_sec": 0, 00:21:08.726 "w_mbytes_per_sec": 0 00:21:08.726 }, 00:21:08.726 "claimed": true, 00:21:08.726 "claim_type": "exclusive_write", 00:21:08.726 "zoned": false, 00:21:08.726 "supported_io_types": { 00:21:08.726 "read": true, 00:21:08.726 "write": true, 00:21:08.726 "unmap": true, 00:21:08.726 "flush": true, 00:21:08.726 "reset": true, 00:21:08.726 "nvme_admin": false, 00:21:08.726 "nvme_io": false, 00:21:08.726 "nvme_io_md": false, 00:21:08.726 "write_zeroes": true, 00:21:08.726 "zcopy": true, 00:21:08.726 "get_zone_info": false, 00:21:08.726 "zone_management": false, 00:21:08.726 "zone_append": false, 00:21:08.726 "compare": false, 00:21:08.726 "compare_and_write": false, 00:21:08.726 "abort": true, 00:21:08.726 "seek_hole": false, 00:21:08.726 "seek_data": false, 00:21:08.726 "copy": true, 00:21:08.726 "nvme_iov_md": false 00:21:08.726 }, 00:21:08.726 "memory_domains": [ 00:21:08.726 { 00:21:08.726 "dma_device_id": "system", 00:21:08.726 "dma_device_type": 1 00:21:08.726 }, 00:21:08.726 { 00:21:08.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.726 "dma_device_type": 2 00:21:08.726 } 00:21:08.726 ], 00:21:08.726 "driver_specific": {} 00:21:08.726 }' 00:21:08.726 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.726 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.984 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:08.984 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.984 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.984 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:08.984 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.984 14:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.984 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:08.984 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:21:09.241 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.241 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.241 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:09.241 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:09.241 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:09.500 "name": "BaseBdev3", 00:21:09.500 "aliases": [ 00:21:09.500 "c9cb8a75-e751-4728-af45-5bb38cb61bd4" 00:21:09.500 ], 00:21:09.500 "product_name": "Malloc disk", 00:21:09.500 "block_size": 512, 00:21:09.500 "num_blocks": 65536, 00:21:09.500 "uuid": "c9cb8a75-e751-4728-af45-5bb38cb61bd4", 00:21:09.500 "assigned_rate_limits": { 00:21:09.500 "rw_ios_per_sec": 0, 00:21:09.500 "rw_mbytes_per_sec": 0, 00:21:09.500 "r_mbytes_per_sec": 0, 00:21:09.500 "w_mbytes_per_sec": 0 00:21:09.500 }, 00:21:09.500 "claimed": true, 00:21:09.500 "claim_type": "exclusive_write", 00:21:09.500 "zoned": false, 00:21:09.500 "supported_io_types": { 00:21:09.500 "read": true, 00:21:09.500 "write": true, 00:21:09.500 "unmap": true, 00:21:09.500 "flush": true, 00:21:09.500 "reset": true, 00:21:09.500 "nvme_admin": false, 00:21:09.500 "nvme_io": false, 00:21:09.500 "nvme_io_md": false, 00:21:09.500 "write_zeroes": true, 00:21:09.500 "zcopy": true, 00:21:09.500 "get_zone_info": false, 00:21:09.500 "zone_management": false, 00:21:09.500 "zone_append": false, 00:21:09.500 "compare": false, 00:21:09.500 "compare_and_write": false, 00:21:09.500 "abort": true, 00:21:09.500 "seek_hole": false, 00:21:09.500 "seek_data": false, 00:21:09.500 "copy": true, 00:21:09.500 "nvme_iov_md": false 00:21:09.500 }, 00:21:09.500 "memory_domains": [ 00:21:09.500 { 00:21:09.500 "dma_device_id": "system", 00:21:09.500 "dma_device_type": 1 00:21:09.500 }, 00:21:09.500 { 00:21:09.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.500 "dma_device_type": 2 00:21:09.500 } 00:21:09.500 ], 00:21:09.500 "driver_specific": {} 00:21:09.500 }' 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.500 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.759 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.759 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.759 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.759 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.759 14:03:58 
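The property checks running through the blocks above walk every configured base bdev of Existed_Raid (NewBaseBdev, BaseBdev2, BaseBdev3) and require block_size, md_size, md_interleave and dif_type to match between the raid volume and each member (512 and three nulls in these dumps). A hedged reconstruction of that loop, built only from the RPC calls and jq filters visible in the trace and simplified relative to bdev_raid.sh@194-208:

```bash
# Sketch of the per-base-bdev property comparison traced above (simplified).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
raid=Existed_Raid

raid_info=$("$RPC" -s "$SOCK" bdev_get_bdevs -b "$raid" | jq '.[]')
# Only members with is_configured == true take part in the comparison.
names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<<"$raid_info")

for name in $names; do
    base_info=$("$RPC" -s "$SOCK" bdev_get_bdevs -b "$name" | jq '.[]')
    for field in .block_size .md_size .md_interleave .dif_type; do
        # Each field must be identical on the raid volume and the base bdev
        # (block_size is 512 here; the three metadata fields are all null).
        [[ $(jq "$field" <<<"$raid_info") == "$(jq "$field" <<<"$base_info")" ]] || exit 1
    done
done
```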
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.759 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:10.017 [2024-07-25 14:03:58.972095] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:10.017 [2024-07-25 14:03:58.972322] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:10.017 [2024-07-25 14:03:58.972533] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:10.017 [2024-07-25 14:03:58.972738] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:10.017 [2024-07-25 14:03:58.972860] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:21:10.017 14:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 128971 00:21:10.017 14:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 128971 ']' 00:21:10.017 14:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 128971 00:21:10.017 14:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:21:10.017 14:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.017 14:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128971 00:21:10.017 14:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:10.017 14:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:10.017 14:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128971' 00:21:10.017 killing process with pid 128971 00:21:10.017 14:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 128971 00:21:10.017 [2024-07-25 14:03:59.018052] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:10.017 14:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 128971 00:21:10.274 [2024-07-25 14:03:59.267857] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:11.650 ************************************ 00:21:11.650 END TEST raid_state_function_test_sb 00:21:11.650 ************************************ 00:21:11.650 14:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:11.650 00:21:11.650 real 0m33.704s 00:21:11.650 user 1m2.878s 00:21:11.650 sys 0m3.745s 00:21:11.650 14:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:11.650 14:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.650 14:04:00 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:21:11.650 14:04:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:11.650 14:04:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.650 14:04:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:11.650 ************************************ 00:21:11.650 START TEST raid_superblock_test 00:21:11.650 
************************************ 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=129993 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 129993 /var/tmp/spdk-raid.sock 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 129993 ']' 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:11.650 14:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.651 14:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:11.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:11.651 14:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.651 14:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.651 [2024-07-25 14:04:00.521257] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
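raid_superblock_test drives a dedicated bdev_svc app that listens on /var/tmp/spdk-raid.sock with the bdev_raid debug log flag enabled, and waitforlisten blocks until that socket answers before the test issues any bdev RPCs. A rough equivalent of the startup is sketched below; the readiness loop is an approximation (using the generic rpc_get_methods call as a liveness probe), not the harness's actual waitforlisten implementation:

```bash
# Approximate startup sequence for this test's RPC target (command line taken from
# the trace; the polling loop stands in for waitforlisten, not its real implementation).
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
raid_pid=$!

# Wait until the UNIX socket exists and answers a trivial RPC before creating bdevs.
for _ in $(seq 1 100); do
    [ -S "$SOCK" ] && "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
```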
00:21:11.651 [2024-07-25 14:04:00.521652] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129993 ] 00:21:11.651 [2024-07-25 14:04:00.679199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.910 [2024-07-25 14:04:00.944727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.167 [2024-07-25 14:04:01.143600] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:12.734 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:12.992 malloc1 00:21:12.992 14:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:13.249 [2024-07-25 14:04:02.152382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:13.249 [2024-07-25 14:04:02.152776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.249 [2024-07-25 14:04:02.152976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:13.249 [2024-07-25 14:04:02.153143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.249 [2024-07-25 14:04:02.155887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.249 [2024-07-25 14:04:02.156073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:13.249 pt1 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:13.249 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:13.510 malloc2 00:21:13.510 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:13.791 [2024-07-25 14:04:02.740450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:13.791 [2024-07-25 14:04:02.740923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.791 [2024-07-25 14:04:02.741121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:13.791 [2024-07-25 14:04:02.741294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.791 [2024-07-25 14:04:02.744143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.791 [2024-07-25 14:04:02.744390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:13.791 pt2 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:13.791 14:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:14.049 malloc3 00:21:14.049 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:14.307 [2024-07-25 14:04:03.279999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:14.307 [2024-07-25 14:04:03.280377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.307 [2024-07-25 14:04:03.280555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:14.307 [2024-07-25 14:04:03.280688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.307 [2024-07-25 14:04:03.283359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.307 [2024-07-25 14:04:03.283547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:14.307 pt3 00:21:14.307 
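Each pass through the loop above builds one base device: a 32 MB malloc bdev with 512-byte blocks (65536 blocks, matching the dumps earlier in the log) wrapped in a passthru bdev with a fixed UUID, so the superblock test has stable member identities. The concat array itself is created with a superblock in the next step of the trace. A condensed sketch of the whole construction, using only commands that appear verbatim in this log:

```bash
# Condensed construction sequence for the array under test
# (commands appear verbatim in the trace; only the loop structure is paraphrased).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

for i in 1 2 3; do
    # 32 MB malloc backing device, 512-byte blocks, wrapped in a passthru bdev
    # with a predictable UUID (00000000-0000-0000-0000-000000000001/2/3).
    "$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b "malloc$i"
    "$RPC" -s "$SOCK" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Create the concat array on the passthru bdevs: -z 64 sets the 64 KiB strip size,
# -s writes an on-disk superblock (this is the next command in the trace).
"$RPC" -s "$SOCK" bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
```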
14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:14.307 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:14.307 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:14.566 [2024-07-25 14:04:03.524104] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:14.566 [2024-07-25 14:04:03.526691] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:14.566 [2024-07-25 14:04:03.526941] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:14.566 [2024-07-25 14:04:03.527271] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:21:14.566 [2024-07-25 14:04:03.527392] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:14.566 [2024-07-25 14:04:03.527578] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:14.566 [2024-07-25 14:04:03.528034] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:21:14.566 [2024-07-25 14:04:03.528167] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:21:14.566 [2024-07-25 14:04:03.528544] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.566 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.825 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.825 "name": "raid_bdev1", 00:21:14.825 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:14.825 "strip_size_kb": 64, 00:21:14.825 "state": "online", 00:21:14.825 "raid_level": "concat", 00:21:14.825 "superblock": true, 00:21:14.825 "num_base_bdevs": 3, 00:21:14.825 "num_base_bdevs_discovered": 3, 00:21:14.825 "num_base_bdevs_operational": 3, 00:21:14.825 "base_bdevs_list": [ 00:21:14.825 { 00:21:14.825 "name": "pt1", 00:21:14.825 "uuid": "00000000-0000-0000-0000-000000000001", 
00:21:14.825 "is_configured": true, 00:21:14.825 "data_offset": 2048, 00:21:14.825 "data_size": 63488 00:21:14.825 }, 00:21:14.825 { 00:21:14.825 "name": "pt2", 00:21:14.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.825 "is_configured": true, 00:21:14.825 "data_offset": 2048, 00:21:14.825 "data_size": 63488 00:21:14.825 }, 00:21:14.825 { 00:21:14.825 "name": "pt3", 00:21:14.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:14.825 "is_configured": true, 00:21:14.825 "data_offset": 2048, 00:21:14.825 "data_size": 63488 00:21:14.825 } 00:21:14.825 ] 00:21:14.825 }' 00:21:14.825 14:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.825 14:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:15.759 [2024-07-25 14:04:04.661043] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:15.759 "name": "raid_bdev1", 00:21:15.759 "aliases": [ 00:21:15.759 "4daf27b4-1a1d-410f-a467-6b8dc8263095" 00:21:15.759 ], 00:21:15.759 "product_name": "Raid Volume", 00:21:15.759 "block_size": 512, 00:21:15.759 "num_blocks": 190464, 00:21:15.759 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:15.759 "assigned_rate_limits": { 00:21:15.759 "rw_ios_per_sec": 0, 00:21:15.759 "rw_mbytes_per_sec": 0, 00:21:15.759 "r_mbytes_per_sec": 0, 00:21:15.759 "w_mbytes_per_sec": 0 00:21:15.759 }, 00:21:15.759 "claimed": false, 00:21:15.759 "zoned": false, 00:21:15.759 "supported_io_types": { 00:21:15.759 "read": true, 00:21:15.759 "write": true, 00:21:15.759 "unmap": true, 00:21:15.759 "flush": true, 00:21:15.759 "reset": true, 00:21:15.759 "nvme_admin": false, 00:21:15.759 "nvme_io": false, 00:21:15.759 "nvme_io_md": false, 00:21:15.759 "write_zeroes": true, 00:21:15.759 "zcopy": false, 00:21:15.759 "get_zone_info": false, 00:21:15.759 "zone_management": false, 00:21:15.759 "zone_append": false, 00:21:15.759 "compare": false, 00:21:15.759 "compare_and_write": false, 00:21:15.759 "abort": false, 00:21:15.759 "seek_hole": false, 00:21:15.759 "seek_data": false, 00:21:15.759 "copy": false, 00:21:15.759 "nvme_iov_md": false 00:21:15.759 }, 00:21:15.759 "memory_domains": [ 00:21:15.759 { 00:21:15.759 "dma_device_id": "system", 00:21:15.759 "dma_device_type": 1 00:21:15.759 }, 00:21:15.759 { 00:21:15.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.759 "dma_device_type": 2 00:21:15.759 }, 00:21:15.759 { 00:21:15.759 "dma_device_id": "system", 00:21:15.759 "dma_device_type": 1 00:21:15.759 }, 
00:21:15.759 { 00:21:15.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.759 "dma_device_type": 2 00:21:15.759 }, 00:21:15.759 { 00:21:15.759 "dma_device_id": "system", 00:21:15.759 "dma_device_type": 1 00:21:15.759 }, 00:21:15.759 { 00:21:15.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.759 "dma_device_type": 2 00:21:15.759 } 00:21:15.759 ], 00:21:15.759 "driver_specific": { 00:21:15.759 "raid": { 00:21:15.759 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:15.759 "strip_size_kb": 64, 00:21:15.759 "state": "online", 00:21:15.759 "raid_level": "concat", 00:21:15.759 "superblock": true, 00:21:15.759 "num_base_bdevs": 3, 00:21:15.759 "num_base_bdevs_discovered": 3, 00:21:15.759 "num_base_bdevs_operational": 3, 00:21:15.759 "base_bdevs_list": [ 00:21:15.759 { 00:21:15.759 "name": "pt1", 00:21:15.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:15.759 "is_configured": true, 00:21:15.759 "data_offset": 2048, 00:21:15.759 "data_size": 63488 00:21:15.759 }, 00:21:15.759 { 00:21:15.759 "name": "pt2", 00:21:15.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.759 "is_configured": true, 00:21:15.759 "data_offset": 2048, 00:21:15.759 "data_size": 63488 00:21:15.759 }, 00:21:15.759 { 00:21:15.759 "name": "pt3", 00:21:15.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:15.759 "is_configured": true, 00:21:15.759 "data_offset": 2048, 00:21:15.759 "data_size": 63488 00:21:15.759 } 00:21:15.759 ] 00:21:15.759 } 00:21:15.759 } 00:21:15.759 }' 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:15.759 pt2 00:21:15.759 pt3' 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:15.759 14:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:16.018 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:16.018 "name": "pt1", 00:21:16.018 "aliases": [ 00:21:16.018 "00000000-0000-0000-0000-000000000001" 00:21:16.018 ], 00:21:16.018 "product_name": "passthru", 00:21:16.018 "block_size": 512, 00:21:16.018 "num_blocks": 65536, 00:21:16.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:16.018 "assigned_rate_limits": { 00:21:16.018 "rw_ios_per_sec": 0, 00:21:16.018 "rw_mbytes_per_sec": 0, 00:21:16.018 "r_mbytes_per_sec": 0, 00:21:16.018 "w_mbytes_per_sec": 0 00:21:16.018 }, 00:21:16.018 "claimed": true, 00:21:16.018 "claim_type": "exclusive_write", 00:21:16.018 "zoned": false, 00:21:16.018 "supported_io_types": { 00:21:16.018 "read": true, 00:21:16.018 "write": true, 00:21:16.018 "unmap": true, 00:21:16.018 "flush": true, 00:21:16.018 "reset": true, 00:21:16.018 "nvme_admin": false, 00:21:16.018 "nvme_io": false, 00:21:16.018 "nvme_io_md": false, 00:21:16.018 "write_zeroes": true, 00:21:16.018 "zcopy": true, 00:21:16.018 "get_zone_info": false, 00:21:16.018 "zone_management": false, 00:21:16.018 "zone_append": false, 00:21:16.018 "compare": false, 00:21:16.018 "compare_and_write": false, 00:21:16.019 "abort": true, 00:21:16.019 "seek_hole": false, 00:21:16.019 "seek_data": false, 00:21:16.019 "copy": true, 00:21:16.019 "nvme_iov_md": 
false 00:21:16.019 }, 00:21:16.019 "memory_domains": [ 00:21:16.019 { 00:21:16.019 "dma_device_id": "system", 00:21:16.019 "dma_device_type": 1 00:21:16.019 }, 00:21:16.019 { 00:21:16.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.019 "dma_device_type": 2 00:21:16.019 } 00:21:16.019 ], 00:21:16.019 "driver_specific": { 00:21:16.019 "passthru": { 00:21:16.019 "name": "pt1", 00:21:16.019 "base_bdev_name": "malloc1" 00:21:16.019 } 00:21:16.019 } 00:21:16.019 }' 00:21:16.019 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.019 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:16.277 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.536 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.536 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.536 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:16.536 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:16.536 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:16.795 "name": "pt2", 00:21:16.795 "aliases": [ 00:21:16.795 "00000000-0000-0000-0000-000000000002" 00:21:16.795 ], 00:21:16.795 "product_name": "passthru", 00:21:16.795 "block_size": 512, 00:21:16.795 "num_blocks": 65536, 00:21:16.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:16.795 "assigned_rate_limits": { 00:21:16.795 "rw_ios_per_sec": 0, 00:21:16.795 "rw_mbytes_per_sec": 0, 00:21:16.795 "r_mbytes_per_sec": 0, 00:21:16.795 "w_mbytes_per_sec": 0 00:21:16.795 }, 00:21:16.795 "claimed": true, 00:21:16.795 "claim_type": "exclusive_write", 00:21:16.795 "zoned": false, 00:21:16.795 "supported_io_types": { 00:21:16.795 "read": true, 00:21:16.795 "write": true, 00:21:16.795 "unmap": true, 00:21:16.795 "flush": true, 00:21:16.795 "reset": true, 00:21:16.795 "nvme_admin": false, 00:21:16.795 "nvme_io": false, 00:21:16.795 "nvme_io_md": false, 00:21:16.795 "write_zeroes": true, 00:21:16.795 "zcopy": true, 00:21:16.795 "get_zone_info": false, 00:21:16.795 "zone_management": false, 00:21:16.795 "zone_append": false, 00:21:16.795 "compare": false, 00:21:16.795 "compare_and_write": false, 00:21:16.795 "abort": true, 00:21:16.795 "seek_hole": false, 00:21:16.795 "seek_data": false, 00:21:16.795 "copy": true, 00:21:16.795 "nvme_iov_md": false 00:21:16.795 }, 00:21:16.795 "memory_domains": [ 00:21:16.795 { 00:21:16.795 "dma_device_id": "system", 00:21:16.795 "dma_device_type": 1 
00:21:16.795 }, 00:21:16.795 { 00:21:16.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.795 "dma_device_type": 2 00:21:16.795 } 00:21:16.795 ], 00:21:16.795 "driver_specific": { 00:21:16.795 "passthru": { 00:21:16.795 "name": "pt2", 00:21:16.795 "base_bdev_name": "malloc2" 00:21:16.795 } 00:21:16.795 } 00:21:16.795 }' 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:16.795 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.061 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.061 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:17.061 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.061 14:04:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.061 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.061 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:17.061 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:17.061 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:17.319 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:17.319 "name": "pt3", 00:21:17.319 "aliases": [ 00:21:17.319 "00000000-0000-0000-0000-000000000003" 00:21:17.319 ], 00:21:17.319 "product_name": "passthru", 00:21:17.319 "block_size": 512, 00:21:17.319 "num_blocks": 65536, 00:21:17.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.319 "assigned_rate_limits": { 00:21:17.319 "rw_ios_per_sec": 0, 00:21:17.319 "rw_mbytes_per_sec": 0, 00:21:17.319 "r_mbytes_per_sec": 0, 00:21:17.319 "w_mbytes_per_sec": 0 00:21:17.319 }, 00:21:17.319 "claimed": true, 00:21:17.319 "claim_type": "exclusive_write", 00:21:17.319 "zoned": false, 00:21:17.319 "supported_io_types": { 00:21:17.319 "read": true, 00:21:17.319 "write": true, 00:21:17.319 "unmap": true, 00:21:17.319 "flush": true, 00:21:17.319 "reset": true, 00:21:17.319 "nvme_admin": false, 00:21:17.319 "nvme_io": false, 00:21:17.319 "nvme_io_md": false, 00:21:17.319 "write_zeroes": true, 00:21:17.319 "zcopy": true, 00:21:17.319 "get_zone_info": false, 00:21:17.319 "zone_management": false, 00:21:17.319 "zone_append": false, 00:21:17.319 "compare": false, 00:21:17.319 "compare_and_write": false, 00:21:17.319 "abort": true, 00:21:17.319 "seek_hole": false, 00:21:17.319 "seek_data": false, 00:21:17.319 "copy": true, 00:21:17.319 "nvme_iov_md": false 00:21:17.319 }, 00:21:17.319 "memory_domains": [ 00:21:17.319 { 00:21:17.319 "dma_device_id": "system", 00:21:17.319 "dma_device_type": 1 00:21:17.319 }, 00:21:17.319 { 00:21:17.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.319 "dma_device_type": 2 00:21:17.319 } 00:21:17.319 ], 
00:21:17.319 "driver_specific": { 00:21:17.319 "passthru": { 00:21:17.319 "name": "pt3", 00:21:17.319 "base_bdev_name": "malloc3" 00:21:17.319 } 00:21:17.319 } 00:21:17.319 }' 00:21:17.319 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.319 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:17.319 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:17.319 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.577 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:17.577 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:17.577 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.577 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:17.577 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:17.577 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.577 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:17.836 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:17.836 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:17.836 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:21:18.095 [2024-07-25 14:04:06.921444] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.095 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=4daf27b4-1a1d-410f-a467-6b8dc8263095 00:21:18.095 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 4daf27b4-1a1d-410f-a467-6b8dc8263095 ']' 00:21:18.095 14:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:18.353 [2024-07-25 14:04:07.217219] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:18.353 [2024-07-25 14:04:07.217475] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.353 [2024-07-25 14:04:07.217685] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.353 [2024-07-25 14:04:07.217908] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.353 [2024-07-25 14:04:07.218040] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:21:18.353 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.353 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:21:18.612 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:21:18.612 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:21:18.612 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:18.612 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:18.869 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:18.869 14:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:19.127 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:19.127 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:19.385 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:19.385 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:19.643 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:19.901 [2024-07-25 14:04:08.858289] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:19.901 [2024-07-25 14:04:08.861327] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:19.901 [2024-07-25 14:04:08.861657] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:19.901 [2024-07-25 14:04:08.861942] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:19.901 
[2024-07-25 14:04:08.862226] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:19.901 [2024-07-25 14:04:08.862458] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:19.901 [2024-07-25 14:04:08.862654] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.901 [2024-07-25 14:04:08.862790] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:21:19.901 request: 00:21:19.901 { 00:21:19.901 "name": "raid_bdev1", 00:21:19.901 "raid_level": "concat", 00:21:19.901 "base_bdevs": [ 00:21:19.901 "malloc1", 00:21:19.901 "malloc2", 00:21:19.901 "malloc3" 00:21:19.901 ], 00:21:19.901 "strip_size_kb": 64, 00:21:19.901 "superblock": false, 00:21:19.901 "method": "bdev_raid_create", 00:21:19.901 "req_id": 1 00:21:19.901 } 00:21:19.901 Got JSON-RPC error response 00:21:19.901 response: 00:21:19.901 { 00:21:19.901 "code": -17, 00:21:19.901 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:19.901 } 00:21:19.901 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:19.901 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:19.901 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:19.901 14:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:19.901 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.901 14:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:21:20.159 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:21:20.159 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:21:20.159 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:20.435 [2024-07-25 14:04:09.391237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:20.435 [2024-07-25 14:04:09.391626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.435 [2024-07-25 14:04:09.391713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:20.435 [2024-07-25 14:04:09.391971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.435 [2024-07-25 14:04:09.394812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.435 [2024-07-25 14:04:09.394993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:20.435 [2024-07-25 14:04:09.395234] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:20.435 [2024-07-25 14:04:09.395405] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:20.435 pt1 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.435 14:04:09 
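The NOT-wrapped bdev_raid_create call traced above is the negative case: malloc1-malloc3 still carry the raid_bdev1 superblock written through pt1-pt3, so building a new array directly on the malloc bdevs is rejected ("Superblock of a different raid bdev found") and the RPC fails with -17, "File exists". A standalone form of that expected-failure check, under the same socket and rpc.py path as this run:

```bash
# Expected-failure check mirroring the NOT wrapper above: the malloc bdevs already
# hold raid_bdev1's superblock, so this create must fail (JSON-RPC error -17, File exists).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

if "$RPC" -s "$SOCK" bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded on superblock-tagged bdevs" >&2
    exit 1
fi
```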
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.435 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.722 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.722 "name": "raid_bdev1", 00:21:20.722 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:20.722 "strip_size_kb": 64, 00:21:20.722 "state": "configuring", 00:21:20.722 "raid_level": "concat", 00:21:20.722 "superblock": true, 00:21:20.722 "num_base_bdevs": 3, 00:21:20.722 "num_base_bdevs_discovered": 1, 00:21:20.722 "num_base_bdevs_operational": 3, 00:21:20.722 "base_bdevs_list": [ 00:21:20.722 { 00:21:20.722 "name": "pt1", 00:21:20.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:20.722 "is_configured": true, 00:21:20.722 "data_offset": 2048, 00:21:20.722 "data_size": 63488 00:21:20.722 }, 00:21:20.722 { 00:21:20.722 "name": null, 00:21:20.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:20.722 "is_configured": false, 00:21:20.722 "data_offset": 2048, 00:21:20.722 "data_size": 63488 00:21:20.722 }, 00:21:20.722 { 00:21:20.722 "name": null, 00:21:20.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:20.722 "is_configured": false, 00:21:20.722 "data_offset": 2048, 00:21:20.722 "data_size": 63488 00:21:20.722 } 00:21:20.722 ] 00:21:20.722 }' 00:21:20.722 14:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.722 14:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.655 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:21:21.656 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:21.656 [2024-07-25 14:04:10.595560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:21.656 [2024-07-25 14:04:10.595876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.656 [2024-07-25 14:04:10.596043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:21.656 [2024-07-25 14:04:10.596178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.656 [2024-07-25 14:04:10.596773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.656 [2024-07-25 14:04:10.596940] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:21:21.656 [2024-07-25 14:04:10.597164] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:21.656 [2024-07-25 14:04:10.597320] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:21.656 pt2 00:21:21.656 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:21.914 [2024-07-25 14:04:10.875664] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.914 14:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.172 14:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.172 "name": "raid_bdev1", 00:21:22.172 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:22.172 "strip_size_kb": 64, 00:21:22.172 "state": "configuring", 00:21:22.172 "raid_level": "concat", 00:21:22.172 "superblock": true, 00:21:22.172 "num_base_bdevs": 3, 00:21:22.172 "num_base_bdevs_discovered": 1, 00:21:22.172 "num_base_bdevs_operational": 3, 00:21:22.172 "base_bdevs_list": [ 00:21:22.172 { 00:21:22.172 "name": "pt1", 00:21:22.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.172 "is_configured": true, 00:21:22.172 "data_offset": 2048, 00:21:22.172 "data_size": 63488 00:21:22.172 }, 00:21:22.172 { 00:21:22.172 "name": null, 00:21:22.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.172 "is_configured": false, 00:21:22.172 "data_offset": 2048, 00:21:22.172 "data_size": 63488 00:21:22.172 }, 00:21:22.172 { 00:21:22.172 "name": null, 00:21:22.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:22.172 "is_configured": false, 00:21:22.172 "data_offset": 2048, 00:21:22.172 "data_size": 63488 00:21:22.172 } 00:21:22.172 ] 00:21:22.172 }' 00:21:22.172 14:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.172 14:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.106 14:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:21:23.106 14:04:11 bdev_raid.raid_superblock_test -- 
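With pt2 deleted again, the array stays in "configuring" with a single member discovered, as the state dump above shows. Re-creating the missing passthru bdevs lets the raid module re-examine them, find raid_bdev1's superblock on each ("raid superblock found on bdev pt2/pt3") and re-claim them, which is what brings the array back online in the remainder of the trace. A minimal manual reproduction of that last step:

```bash
# Re-create the missing members; examine finds raid_bdev1's superblock on each and
# re-claims it, so the array should report "online" once all three members are back.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

"$RPC" -s "$SOCK" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
"$RPC" -s "$SOCK" bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
"$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # expect: online
```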
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:23.106 14:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:23.106 [2024-07-25 14:04:12.051843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:23.106 [2024-07-25 14:04:12.052211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.106 [2024-07-25 14:04:12.052416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:23.106 [2024-07-25 14:04:12.052548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.106 [2024-07-25 14:04:12.053123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.106 [2024-07-25 14:04:12.053289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:23.106 [2024-07-25 14:04:12.053550] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:23.106 [2024-07-25 14:04:12.053689] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:23.106 pt2 00:21:23.106 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:23.106 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:23.106 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:23.364 [2024-07-25 14:04:12.335934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:23.365 [2024-07-25 14:04:12.336200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.365 [2024-07-25 14:04:12.336376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:23.365 [2024-07-25 14:04:12.336504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.365 [2024-07-25 14:04:12.337178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.365 [2024-07-25 14:04:12.337339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:23.365 [2024-07-25 14:04:12.337590] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:23.365 [2024-07-25 14:04:12.337728] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:23.365 [2024-07-25 14:04:12.338029] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:21:23.365 [2024-07-25 14:04:12.338167] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:23.365 [2024-07-25 14:04:12.338304] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:23.365 [2024-07-25 14:04:12.338699] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:21:23.365 [2024-07-25 14:04:12.338824] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:21:23.365 [2024-07-25 14:04:12.339076] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.365 pt3 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( 
i++ )) 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.365 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.623 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.623 "name": "raid_bdev1", 00:21:23.623 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:23.623 "strip_size_kb": 64, 00:21:23.623 "state": "online", 00:21:23.623 "raid_level": "concat", 00:21:23.623 "superblock": true, 00:21:23.623 "num_base_bdevs": 3, 00:21:23.623 "num_base_bdevs_discovered": 3, 00:21:23.623 "num_base_bdevs_operational": 3, 00:21:23.623 "base_bdevs_list": [ 00:21:23.623 { 00:21:23.623 "name": "pt1", 00:21:23.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.623 "is_configured": true, 00:21:23.623 "data_offset": 2048, 00:21:23.623 "data_size": 63488 00:21:23.623 }, 00:21:23.623 { 00:21:23.623 "name": "pt2", 00:21:23.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.623 "is_configured": true, 00:21:23.623 "data_offset": 2048, 00:21:23.623 "data_size": 63488 00:21:23.623 }, 00:21:23.623 { 00:21:23.623 "name": "pt3", 00:21:23.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.623 "is_configured": true, 00:21:23.623 "data_offset": 2048, 00:21:23.623 "data_size": 63488 00:21:23.623 } 00:21:23.623 ] 00:21:23.623 }' 00:21:23.623 14:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.623 14:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
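Every pass above funnels through the same check: verify_raid_bdev_state pulls the raid bdev out of bdev_raid_get_bdevs and compares its state, raid level, strip size and base-bdev counts against what the test expects ("configuring" while base bdevs are still missing, "online" once pt1, pt2 and pt3 are all claimed). A minimal hand-run sketch of that query, using only the RPC socket, script path and jq filter that appear in the trace (it assumes the test target is still listening on /var/tmp/spdk-raid.sock and that jq is installed):

# Fetch the raid bdev the same way verify_raid_bdev_state does and print just its state
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1") | .state'
# Prints "configuring" until all three base bdevs are configured, then "online"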
00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:24.555 [2024-07-25 14:04:13.564515] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.555 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:24.555 "name": "raid_bdev1", 00:21:24.555 "aliases": [ 00:21:24.555 "4daf27b4-1a1d-410f-a467-6b8dc8263095" 00:21:24.555 ], 00:21:24.555 "product_name": "Raid Volume", 00:21:24.555 "block_size": 512, 00:21:24.555 "num_blocks": 190464, 00:21:24.555 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:24.555 "assigned_rate_limits": { 00:21:24.555 "rw_ios_per_sec": 0, 00:21:24.555 "rw_mbytes_per_sec": 0, 00:21:24.555 "r_mbytes_per_sec": 0, 00:21:24.555 "w_mbytes_per_sec": 0 00:21:24.555 }, 00:21:24.555 "claimed": false, 00:21:24.555 "zoned": false, 00:21:24.555 "supported_io_types": { 00:21:24.555 "read": true, 00:21:24.555 "write": true, 00:21:24.555 "unmap": true, 00:21:24.555 "flush": true, 00:21:24.555 "reset": true, 00:21:24.555 "nvme_admin": false, 00:21:24.555 "nvme_io": false, 00:21:24.555 "nvme_io_md": false, 00:21:24.555 "write_zeroes": true, 00:21:24.555 "zcopy": false, 00:21:24.555 "get_zone_info": false, 00:21:24.555 "zone_management": false, 00:21:24.555 "zone_append": false, 00:21:24.555 "compare": false, 00:21:24.555 "compare_and_write": false, 00:21:24.555 "abort": false, 00:21:24.555 "seek_hole": false, 00:21:24.555 "seek_data": false, 00:21:24.555 "copy": false, 00:21:24.555 "nvme_iov_md": false 00:21:24.555 }, 00:21:24.555 "memory_domains": [ 00:21:24.555 { 00:21:24.555 "dma_device_id": "system", 00:21:24.555 "dma_device_type": 1 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.555 "dma_device_type": 2 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "dma_device_id": "system", 00:21:24.555 "dma_device_type": 1 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.555 "dma_device_type": 2 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "dma_device_id": "system", 00:21:24.555 "dma_device_type": 1 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.555 "dma_device_type": 2 00:21:24.555 } 00:21:24.555 ], 00:21:24.555 "driver_specific": { 00:21:24.555 "raid": { 00:21:24.555 "uuid": "4daf27b4-1a1d-410f-a467-6b8dc8263095", 00:21:24.555 "strip_size_kb": 64, 00:21:24.555 "state": "online", 00:21:24.555 "raid_level": "concat", 00:21:24.555 "superblock": true, 00:21:24.555 "num_base_bdevs": 3, 00:21:24.555 "num_base_bdevs_discovered": 3, 00:21:24.555 "num_base_bdevs_operational": 3, 00:21:24.555 "base_bdevs_list": [ 00:21:24.555 { 00:21:24.555 "name": "pt1", 00:21:24.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:24.555 "is_configured": true, 00:21:24.555 "data_offset": 2048, 00:21:24.555 "data_size": 63488 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "name": "pt2", 00:21:24.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.555 "is_configured": true, 00:21:24.555 "data_offset": 2048, 00:21:24.555 "data_size": 63488 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "name": "pt3", 00:21:24.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.556 "is_configured": true, 00:21:24.556 "data_offset": 2048, 00:21:24.556 "data_size": 63488 00:21:24.556 } 
00:21:24.556 ] 00:21:24.556 } 00:21:24.556 } 00:21:24.556 }' 00:21:24.556 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.813 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:24.813 pt2 00:21:24.813 pt3' 00:21:24.813 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:24.813 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:24.813 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:25.070 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:25.070 "name": "pt1", 00:21:25.070 "aliases": [ 00:21:25.070 "00000000-0000-0000-0000-000000000001" 00:21:25.070 ], 00:21:25.070 "product_name": "passthru", 00:21:25.070 "block_size": 512, 00:21:25.070 "num_blocks": 65536, 00:21:25.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:25.070 "assigned_rate_limits": { 00:21:25.070 "rw_ios_per_sec": 0, 00:21:25.070 "rw_mbytes_per_sec": 0, 00:21:25.070 "r_mbytes_per_sec": 0, 00:21:25.070 "w_mbytes_per_sec": 0 00:21:25.071 }, 00:21:25.071 "claimed": true, 00:21:25.071 "claim_type": "exclusive_write", 00:21:25.071 "zoned": false, 00:21:25.071 "supported_io_types": { 00:21:25.071 "read": true, 00:21:25.071 "write": true, 00:21:25.071 "unmap": true, 00:21:25.071 "flush": true, 00:21:25.071 "reset": true, 00:21:25.071 "nvme_admin": false, 00:21:25.071 "nvme_io": false, 00:21:25.071 "nvme_io_md": false, 00:21:25.071 "write_zeroes": true, 00:21:25.071 "zcopy": true, 00:21:25.071 "get_zone_info": false, 00:21:25.071 "zone_management": false, 00:21:25.071 "zone_append": false, 00:21:25.071 "compare": false, 00:21:25.071 "compare_and_write": false, 00:21:25.071 "abort": true, 00:21:25.071 "seek_hole": false, 00:21:25.071 "seek_data": false, 00:21:25.071 "copy": true, 00:21:25.071 "nvme_iov_md": false 00:21:25.071 }, 00:21:25.071 "memory_domains": [ 00:21:25.071 { 00:21:25.071 "dma_device_id": "system", 00:21:25.071 "dma_device_type": 1 00:21:25.071 }, 00:21:25.071 { 00:21:25.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.071 "dma_device_type": 2 00:21:25.071 } 00:21:25.071 ], 00:21:25.071 "driver_specific": { 00:21:25.071 "passthru": { 00:21:25.071 "name": "pt1", 00:21:25.071 "base_bdev_name": "malloc1" 00:21:25.071 } 00:21:25.071 } 00:21:25.071 }' 00:21:25.071 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.071 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.071 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:25.071 14:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.071 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.071 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:25.071 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:25.329 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:25.587 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:25.587 "name": "pt2", 00:21:25.587 "aliases": [ 00:21:25.587 "00000000-0000-0000-0000-000000000002" 00:21:25.587 ], 00:21:25.587 "product_name": "passthru", 00:21:25.587 "block_size": 512, 00:21:25.587 "num_blocks": 65536, 00:21:25.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:25.587 "assigned_rate_limits": { 00:21:25.587 "rw_ios_per_sec": 0, 00:21:25.587 "rw_mbytes_per_sec": 0, 00:21:25.587 "r_mbytes_per_sec": 0, 00:21:25.587 "w_mbytes_per_sec": 0 00:21:25.587 }, 00:21:25.587 "claimed": true, 00:21:25.587 "claim_type": "exclusive_write", 00:21:25.587 "zoned": false, 00:21:25.587 "supported_io_types": { 00:21:25.587 "read": true, 00:21:25.587 "write": true, 00:21:25.587 "unmap": true, 00:21:25.587 "flush": true, 00:21:25.587 "reset": true, 00:21:25.587 "nvme_admin": false, 00:21:25.587 "nvme_io": false, 00:21:25.587 "nvme_io_md": false, 00:21:25.587 "write_zeroes": true, 00:21:25.587 "zcopy": true, 00:21:25.587 "get_zone_info": false, 00:21:25.587 "zone_management": false, 00:21:25.587 "zone_append": false, 00:21:25.587 "compare": false, 00:21:25.588 "compare_and_write": false, 00:21:25.588 "abort": true, 00:21:25.588 "seek_hole": false, 00:21:25.588 "seek_data": false, 00:21:25.588 "copy": true, 00:21:25.588 "nvme_iov_md": false 00:21:25.588 }, 00:21:25.588 "memory_domains": [ 00:21:25.588 { 00:21:25.588 "dma_device_id": "system", 00:21:25.588 "dma_device_type": 1 00:21:25.588 }, 00:21:25.588 { 00:21:25.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.588 "dma_device_type": 2 00:21:25.588 } 00:21:25.588 ], 00:21:25.588 "driver_specific": { 00:21:25.588 "passthru": { 00:21:25.588 "name": "pt2", 00:21:25.588 "base_bdev_name": "malloc2" 00:21:25.588 } 00:21:25.588 } 00:21:25.588 }' 00:21:25.588 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.588 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.588 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:25.588 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.846 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.846 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:25.846 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.846 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.846 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.846 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.846 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.104 14:04:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:26.104 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.104 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:26.104 14:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.363 "name": "pt3", 00:21:26.363 "aliases": [ 00:21:26.363 "00000000-0000-0000-0000-000000000003" 00:21:26.363 ], 00:21:26.363 "product_name": "passthru", 00:21:26.363 "block_size": 512, 00:21:26.363 "num_blocks": 65536, 00:21:26.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:26.363 "assigned_rate_limits": { 00:21:26.363 "rw_ios_per_sec": 0, 00:21:26.363 "rw_mbytes_per_sec": 0, 00:21:26.363 "r_mbytes_per_sec": 0, 00:21:26.363 "w_mbytes_per_sec": 0 00:21:26.363 }, 00:21:26.363 "claimed": true, 00:21:26.363 "claim_type": "exclusive_write", 00:21:26.363 "zoned": false, 00:21:26.363 "supported_io_types": { 00:21:26.363 "read": true, 00:21:26.363 "write": true, 00:21:26.363 "unmap": true, 00:21:26.363 "flush": true, 00:21:26.363 "reset": true, 00:21:26.363 "nvme_admin": false, 00:21:26.363 "nvme_io": false, 00:21:26.363 "nvme_io_md": false, 00:21:26.363 "write_zeroes": true, 00:21:26.363 "zcopy": true, 00:21:26.363 "get_zone_info": false, 00:21:26.363 "zone_management": false, 00:21:26.363 "zone_append": false, 00:21:26.363 "compare": false, 00:21:26.363 "compare_and_write": false, 00:21:26.363 "abort": true, 00:21:26.363 "seek_hole": false, 00:21:26.363 "seek_data": false, 00:21:26.363 "copy": true, 00:21:26.363 "nvme_iov_md": false 00:21:26.363 }, 00:21:26.363 "memory_domains": [ 00:21:26.363 { 00:21:26.363 "dma_device_id": "system", 00:21:26.363 "dma_device_type": 1 00:21:26.363 }, 00:21:26.363 { 00:21:26.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.363 "dma_device_type": 2 00:21:26.363 } 00:21:26.363 ], 00:21:26.363 "driver_specific": { 00:21:26.363 "passthru": { 00:21:26.363 "name": "pt3", 00:21:26.363 "base_bdev_name": "malloc3" 00:21:26.363 } 00:21:26.363 } 00:21:26.363 }' 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.363 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.622 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.622 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:26.622 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.622 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.622 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:26.622 14:04:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:26.622 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:21:26.881 [2024-07-25 14:04:15.827178] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 4daf27b4-1a1d-410f-a467-6b8dc8263095 '!=' 4daf27b4-1a1d-410f-a467-6b8dc8263095 ']' 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 129993 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 129993 ']' 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 129993 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 129993 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 129993' 00:21:26.881 killing process with pid 129993 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 129993 00:21:26.881 [2024-07-25 14:04:15.873739] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:26.881 14:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 129993 00:21:26.881 [2024-07-25 14:04:15.873926] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:26.881 [2024-07-25 14:04:15.874122] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:26.881 [2024-07-25 14:04:15.874173] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:21:27.140 [2024-07-25 14:04:16.123744] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.521 ************************************ 00:21:28.521 END TEST raid_superblock_test 00:21:28.521 ************************************ 00:21:28.521 14:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:21:28.521 00:21:28.521 real 0m16.742s 00:21:28.521 user 0m30.043s 00:21:28.521 sys 0m2.043s 00:21:28.521 14:04:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:28.521 14:04:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.521 14:04:17 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:21:28.521 14:04:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:28.521 14:04:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:28.521 
14:04:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.521 ************************************ 00:21:28.521 START TEST raid_read_error_test 00:21:28.521 ************************************ 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=concat 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=3 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' concat '!=' raid1 ']' 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.10SQByyqyX 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=130501 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 130501 /var/tmp/spdk-raid.sock 00:21:28.521 14:04:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 130501 ']' 00:21:28.522 14:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:28.522 14:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.522 14:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:28.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:28.522 14:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.522 14:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.522 [2024-07-25 14:04:17.341527] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:21:28.522 [2024-07-25 14:04:17.342147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130501 ] 00:21:28.522 [2024-07-25 14:04:17.511957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.780 [2024-07-25 14:04:17.733478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.038 [2024-07-25 14:04:17.926360] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.297 14:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:29.297 14:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:29.297 14:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:21:29.297 14:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:29.555 BaseBdev1_malloc 00:21:29.813 14:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:30.071 true 00:21:30.071 14:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:30.329 [2024-07-25 14:04:19.132996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:30.329 [2024-07-25 14:04:19.133400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.329 [2024-07-25 14:04:19.133616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:30.329 [2024-07-25 14:04:19.133783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.329 [2024-07-25 14:04:19.136575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.329 [2024-07-25 14:04:19.136747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:30.329 BaseBdev1 00:21:30.329 14:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:21:30.329 14:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:30.587 BaseBdev2_malloc 00:21:30.587 14:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:30.845 true 00:21:30.845 14:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:31.103 [2024-07-25 14:04:19.948582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:31.103 [2024-07-25 14:04:19.949003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.103 [2024-07-25 14:04:19.949214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:31.103 [2024-07-25 14:04:19.949353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.103 [2024-07-25 14:04:19.951998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.103 [2024-07-25 14:04:19.952187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:31.103 BaseBdev2 00:21:31.103 14:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:21:31.103 14:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:31.361 BaseBdev3_malloc 00:21:31.361 14:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:31.619 true 00:21:31.619 14:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:31.909 [2024-07-25 14:04:20.787173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:31.909 [2024-07-25 14:04:20.787548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.909 [2024-07-25 14:04:20.787761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:31.909 [2024-07-25 14:04:20.787907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.909 [2024-07-25 14:04:20.790626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.909 [2024-07-25 14:04:20.790829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:31.909 BaseBdev3 00:21:31.909 14:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:32.167 [2024-07-25 14:04:21.055428] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.167 [2024-07-25 14:04:21.058115] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.167 [2024-07-25 14:04:21.058419] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.167 [2024-07-25 14:04:21.058821] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:21:32.167 [2024-07-25 
14:04:21.058984] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:32.167 [2024-07-25 14:04:21.059188] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:32.167 [2024-07-25 14:04:21.059756] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:21:32.167 [2024-07-25 14:04:21.059899] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:21:32.167 [2024-07-25 14:04:21.060303] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.167 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:32.167 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:32.167 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:32.167 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:32.167 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:32.168 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:32.168 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.168 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.168 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.168 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.168 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.168 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.426 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.426 "name": "raid_bdev1", 00:21:32.426 "uuid": "252be4f3-a418-4b2f-9c2c-a1bd097e2b2a", 00:21:32.426 "strip_size_kb": 64, 00:21:32.426 "state": "online", 00:21:32.426 "raid_level": "concat", 00:21:32.426 "superblock": true, 00:21:32.426 "num_base_bdevs": 3, 00:21:32.426 "num_base_bdevs_discovered": 3, 00:21:32.426 "num_base_bdevs_operational": 3, 00:21:32.426 "base_bdevs_list": [ 00:21:32.426 { 00:21:32.426 "name": "BaseBdev1", 00:21:32.426 "uuid": "dfc47a0d-6394-5da3-892e-a1b239cf1c29", 00:21:32.426 "is_configured": true, 00:21:32.426 "data_offset": 2048, 00:21:32.426 "data_size": 63488 00:21:32.426 }, 00:21:32.426 { 00:21:32.426 "name": "BaseBdev2", 00:21:32.426 "uuid": "34891ca5-bd08-56a8-9566-cb2cd41c1575", 00:21:32.426 "is_configured": true, 00:21:32.426 "data_offset": 2048, 00:21:32.426 "data_size": 63488 00:21:32.426 }, 00:21:32.426 { 00:21:32.426 "name": "BaseBdev3", 00:21:32.426 "uuid": "02aef134-6288-55f3-bcac-59d19aab3b8c", 00:21:32.426 "is_configured": true, 00:21:32.426 "data_offset": 2048, 00:21:32.426 "data_size": 63488 00:21:32.426 } 00:21:32.426 ] 00:21:32.426 }' 00:21:32.426 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.426 14:04:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.992 14:04:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:21:32.992 14:04:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:33.249 [2024-07-25 14:04:22.053779] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:34.181 14:04:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ concat = \r\a\i\d\1 ]] 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=3 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.437 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.695 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.695 "name": "raid_bdev1", 00:21:34.695 "uuid": "252be4f3-a418-4b2f-9c2c-a1bd097e2b2a", 00:21:34.695 "strip_size_kb": 64, 00:21:34.695 "state": "online", 00:21:34.695 "raid_level": "concat", 00:21:34.695 "superblock": true, 00:21:34.695 "num_base_bdevs": 3, 00:21:34.695 "num_base_bdevs_discovered": 3, 00:21:34.695 "num_base_bdevs_operational": 3, 00:21:34.695 "base_bdevs_list": [ 00:21:34.695 { 00:21:34.695 "name": "BaseBdev1", 00:21:34.695 "uuid": "dfc47a0d-6394-5da3-892e-a1b239cf1c29", 00:21:34.695 "is_configured": true, 00:21:34.695 "data_offset": 2048, 00:21:34.695 "data_size": 63488 00:21:34.695 }, 00:21:34.695 { 00:21:34.695 "name": "BaseBdev2", 00:21:34.695 "uuid": "34891ca5-bd08-56a8-9566-cb2cd41c1575", 00:21:34.695 "is_configured": true, 00:21:34.695 "data_offset": 2048, 00:21:34.695 "data_size": 63488 00:21:34.695 }, 00:21:34.695 { 00:21:34.695 "name": "BaseBdev3", 00:21:34.695 "uuid": "02aef134-6288-55f3-bcac-59d19aab3b8c", 00:21:34.695 "is_configured": true, 00:21:34.695 "data_offset": 2048, 00:21:34.695 "data_size": 63488 00:21:34.695 } 00:21:34.695 ] 00:21:34.695 }' 00:21:34.695 14:04:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
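At this point in the read-error pass the concat volume is online on top of the malloc -> error -> passthru stacks, bdevperf has been asked to run its workload via perform_tests, and the harness arms the error bdev under BaseBdev1 so reads start failing while I/O is in flight. A rough sketch of those RPC-side steps as they could be issued by hand, using only commands that appear in the trace (socket path and bdev names as logged; the bdevperf process itself must already be running with the -z/-T options shown earlier, and backgrounding perform_tests here is a choice made for the sketch):

# Start the configured bdevperf workload, backgrounded so the error can be injected while I/O runs
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
sleep 1
# Make every read that reaches the error bdev under BaseBdev1 fail
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
# As in the trace, the concat raid is still reported online afterwards (concat has no redundancy),
# and bdev_raid_delete raid_bdev1 tears it down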
00:21:34.695 14:04:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.261 14:04:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:35.519 [2024-07-25 14:04:24.530858] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.519 [2024-07-25 14:04:24.531175] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.519 [2024-07-25 14:04:24.534386] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.519 [2024-07-25 14:04:24.534593] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.519 [2024-07-25 14:04:24.534677] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.519 [2024-07-25 14:04:24.534888] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:21:35.519 0 00:21:35.519 14:04:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 130501 00:21:35.519 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 130501 ']' 00:21:35.519 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 130501 00:21:35.519 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:35.519 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.519 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130501 00:21:35.783 killing process with pid 130501 00:21:35.783 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:35.783 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:35.783 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130501' 00:21:35.783 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 130501 00:21:35.783 14:04:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 130501 00:21:35.783 [2024-07-25 14:04:24.576064] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:35.783 [2024-07-25 14:04:24.768608] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.10SQByyqyX 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.40 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy concat 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.40 != \0\.\0\0 ]] 00:21:37.157 00:21:37.157 real 0m8.696s 00:21:37.157 user 0m13.515s 00:21:37.157 sys 0m0.962s 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:21:37.157 14:04:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.157 ************************************ 00:21:37.157 END TEST raid_read_error_test 00:21:37.157 ************************************ 00:21:37.157 14:04:25 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:21:37.157 14:04:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:37.157 14:04:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.157 14:04:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.157 ************************************ 00:21:37.157 START TEST raid_write_error_test 00:21:37.157 ************************************ 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=concat 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=3 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' concat '!=' raid1 ']' 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:21:37.157 14:04:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.TdqEAYIvAm 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=130706 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 130706 /var/tmp/spdk-raid.sock 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 130706 ']' 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:37.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.157 14:04:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.157 [2024-07-25 14:04:26.084631] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:21:37.157 [2024-07-25 14:04:26.084992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130706 ] 00:21:37.415 [2024-07-25 14:04:26.244603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.674 [2024-07-25 14:04:26.459981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.674 [2024-07-25 14:04:26.661100] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.239 14:04:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.239 14:04:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:38.239 14:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:21:38.239 14:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:38.497 BaseBdev1_malloc 00:21:38.497 14:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:38.755 true 00:21:38.755 14:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:38.755 [2024-07-25 14:04:27.786592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:38.755 [2024-07-25 14:04:27.787048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.755 [2024-07-25 14:04:27.787228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:38.755 [2024-07-25 
14:04:27.787382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.755 [2024-07-25 14:04:27.790272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.755 [2024-07-25 14:04:27.790455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:38.755 BaseBdev1 00:21:39.012 14:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:21:39.012 14:04:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:39.270 BaseBdev2_malloc 00:21:39.270 14:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:39.528 true 00:21:39.528 14:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:39.786 [2024-07-25 14:04:28.769368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:39.786 [2024-07-25 14:04:28.769823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.786 [2024-07-25 14:04:28.769997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:39.786 [2024-07-25 14:04:28.770122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.786 [2024-07-25 14:04:28.772796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.786 [2024-07-25 14:04:28.772967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:39.786 BaseBdev2 00:21:39.786 14:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:21:39.786 14:04:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:40.044 BaseBdev3_malloc 00:21:40.044 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:40.302 true 00:21:40.302 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:40.559 [2024-07-25 14:04:29.556422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:40.559 [2024-07-25 14:04:29.556786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.559 [2024-07-25 14:04:29.556950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:40.559 [2024-07-25 14:04:29.557079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.559 [2024-07-25 14:04:29.559864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.560 [2024-07-25 14:04:29.560046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:40.560 BaseBdev3 00:21:40.560 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:40.817 [2024-07-25 14:04:29.792664] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.817 [2024-07-25 14:04:29.795185] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.817 [2024-07-25 14:04:29.795474] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.817 [2024-07-25 14:04:29.795869] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:21:40.817 [2024-07-25 14:04:29.796026] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:40.817 [2024-07-25 14:04:29.796225] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:40.817 [2024-07-25 14:04:29.796699] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:21:40.817 [2024-07-25 14:04:29.796824] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:21:40.817 [2024-07-25 14:04:29.797228] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.817 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:40.817 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:40.817 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.817 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:40.817 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.817 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:40.817 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.818 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.818 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.818 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.818 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.818 14:04:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.076 14:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.076 "name": "raid_bdev1", 00:21:41.076 "uuid": "85d2b7fc-0d17-4ea6-ae1f-9843ef0af1b8", 00:21:41.076 "strip_size_kb": 64, 00:21:41.076 "state": "online", 00:21:41.076 "raid_level": "concat", 00:21:41.076 "superblock": true, 00:21:41.076 "num_base_bdevs": 3, 00:21:41.076 "num_base_bdevs_discovered": 3, 00:21:41.076 "num_base_bdevs_operational": 3, 00:21:41.076 "base_bdevs_list": [ 00:21:41.076 { 00:21:41.076 "name": "BaseBdev1", 00:21:41.076 "uuid": "f66be9f1-c113-58fd-af3c-04d82c8cbc00", 00:21:41.076 "is_configured": true, 00:21:41.076 "data_offset": 2048, 00:21:41.076 "data_size": 63488 00:21:41.076 }, 00:21:41.076 { 00:21:41.076 "name": "BaseBdev2", 00:21:41.076 "uuid": "2a8777e2-7949-53fb-9487-1f4164ddebe4", 00:21:41.076 "is_configured": true, 
00:21:41.076 "data_offset": 2048, 00:21:41.076 "data_size": 63488 00:21:41.076 }, 00:21:41.076 { 00:21:41.076 "name": "BaseBdev3", 00:21:41.076 "uuid": "9a381994-604d-56fb-b52a-5bb7696a504f", 00:21:41.076 "is_configured": true, 00:21:41.076 "data_offset": 2048, 00:21:41.076 "data_size": 63488 00:21:41.076 } 00:21:41.076 ] 00:21:41.076 }' 00:21:41.076 14:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.076 14:04:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.010 14:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:42.010 14:04:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:21:42.010 [2024-07-25 14:04:30.834756] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:42.944 14:04:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ concat = \r\a\i\d\1 ]] 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=3 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.203 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.480 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.480 "name": "raid_bdev1", 00:21:43.480 "uuid": "85d2b7fc-0d17-4ea6-ae1f-9843ef0af1b8", 00:21:43.480 "strip_size_kb": 64, 00:21:43.480 "state": "online", 00:21:43.480 "raid_level": "concat", 00:21:43.480 "superblock": true, 00:21:43.480 "num_base_bdevs": 3, 00:21:43.480 "num_base_bdevs_discovered": 3, 00:21:43.480 "num_base_bdevs_operational": 3, 00:21:43.480 "base_bdevs_list": [ 00:21:43.480 { 00:21:43.480 "name": "BaseBdev1", 00:21:43.480 "uuid": "f66be9f1-c113-58fd-af3c-04d82c8cbc00", 00:21:43.480 "is_configured": true, 
00:21:43.480 "data_offset": 2048, 00:21:43.480 "data_size": 63488 00:21:43.480 }, 00:21:43.480 { 00:21:43.480 "name": "BaseBdev2", 00:21:43.480 "uuid": "2a8777e2-7949-53fb-9487-1f4164ddebe4", 00:21:43.480 "is_configured": true, 00:21:43.480 "data_offset": 2048, 00:21:43.480 "data_size": 63488 00:21:43.480 }, 00:21:43.480 { 00:21:43.480 "name": "BaseBdev3", 00:21:43.480 "uuid": "9a381994-604d-56fb-b52a-5bb7696a504f", 00:21:43.480 "is_configured": true, 00:21:43.480 "data_offset": 2048, 00:21:43.480 "data_size": 63488 00:21:43.480 } 00:21:43.480 ] 00:21:43.480 }' 00:21:43.480 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.480 14:04:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.052 14:04:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:44.309 [2024-07-25 14:04:33.182694] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.309 [2024-07-25 14:04:33.183036] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.309 [2024-07-25 14:04:33.185990] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.309 [2024-07-25 14:04:33.186251] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.309 [2024-07-25 14:04:33.186450] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.309 [2024-07-25 14:04:33.186560] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:21:44.309 0 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 130706 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 130706 ']' 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 130706 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130706 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130706' 00:21:44.309 killing process with pid 130706 00:21:44.309 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 130706 00:21:44.310 14:04:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 130706 00:21:44.310 [2024-07-25 14:04:33.227766] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.566 [2024-07-25 14:04:33.421657] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.TdqEAYIvAm 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 
00:21:45.939 ************************************ 00:21:45.939 END TEST raid_write_error_test 00:21:45.939 ************************************ 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.43 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy concat 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.43 != \0\.\0\0 ]] 00:21:45.939 00:21:45.939 real 0m8.597s 00:21:45.939 user 0m13.318s 00:21:45.939 sys 0m0.971s 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.939 14:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.939 14:04:34 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:21:45.939 14:04:34 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:21:45.939 14:04:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:45.939 14:04:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.939 14:04:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:45.939 ************************************ 00:21:45.939 START TEST raid_state_function_test 00:21:45.939 ************************************ 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:45.939 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:45.940 Process raid pid: 130918 00:21:45.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=130918 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 130918' 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 130918 /var/tmp/spdk-raid.sock 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 130918 ']' 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.940 14:04:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.940 [2024-07-25 14:04:34.729634] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
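The state-function test drives a bare bdev_svc application over the same socket instead of bdevperf. A minimal sketch of the startup handshake traced above, assuming the repo layout seen in this run; pid 130918 belongs to this run only, and the polling loop merely stands in for the autotest waitforlisten helper:

    sock=/var/tmp/spdk-raid.sock
    # -i 0 pins the shm id, -L bdev_raid enables the bdev_raid debug log flag that
    # produces the *DEBUG* lines seen throughout this test.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"

    # Wait until the daemon answers on its UNIX-domain RPC socket before issuing
    # any bdev_raid_create calls (stand-in for waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done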
00:21:45.940 [2024-07-25 14:04:34.729881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.940 [2024-07-25 14:04:34.895147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.198 [2024-07-25 14:04:35.110329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.457 [2024-07-25 14:04:35.312680] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.750 14:04:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.750 14:04:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:21:46.750 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:47.031 [2024-07-25 14:04:35.949213] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:47.031 [2024-07-25 14:04:35.949387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:47.031 [2024-07-25 14:04:35.949411] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:47.031 [2024-07-25 14:04:35.949455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:47.031 [2024-07-25 14:04:35.949484] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:47.031 [2024-07-25 14:04:35.949515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.031 14:04:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.291 14:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.291 "name": "Existed_Raid", 00:21:47.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.291 
"strip_size_kb": 0, 00:21:47.291 "state": "configuring", 00:21:47.291 "raid_level": "raid1", 00:21:47.291 "superblock": false, 00:21:47.291 "num_base_bdevs": 3, 00:21:47.291 "num_base_bdevs_discovered": 0, 00:21:47.291 "num_base_bdevs_operational": 3, 00:21:47.291 "base_bdevs_list": [ 00:21:47.291 { 00:21:47.291 "name": "BaseBdev1", 00:21:47.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.291 "is_configured": false, 00:21:47.291 "data_offset": 0, 00:21:47.291 "data_size": 0 00:21:47.291 }, 00:21:47.291 { 00:21:47.291 "name": "BaseBdev2", 00:21:47.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.291 "is_configured": false, 00:21:47.291 "data_offset": 0, 00:21:47.291 "data_size": 0 00:21:47.291 }, 00:21:47.291 { 00:21:47.291 "name": "BaseBdev3", 00:21:47.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.291 "is_configured": false, 00:21:47.291 "data_offset": 0, 00:21:47.291 "data_size": 0 00:21:47.291 } 00:21:47.291 ] 00:21:47.291 }' 00:21:47.291 14:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.291 14:04:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.223 14:04:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:48.223 [2024-07-25 14:04:37.149272] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:48.223 [2024-07-25 14:04:37.149328] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:21:48.223 14:04:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:48.481 [2024-07-25 14:04:37.385335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:48.481 [2024-07-25 14:04:37.385471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:48.481 [2024-07-25 14:04:37.385487] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:48.481 [2024-07-25 14:04:37.385509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:48.481 [2024-07-25 14:04:37.385518] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:48.481 [2024-07-25 14:04:37.385545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:48.481 14:04:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:48.739 [2024-07-25 14:04:37.714522] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.739 BaseBdev1 00:21:48.739 14:04:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:48.739 14:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:48.739 14:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:48.739 14:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:48.739 14:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- 
# [[ -z '' ]] 00:21:48.739 14:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:48.739 14:04:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.997 14:04:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:49.255 [ 00:21:49.255 { 00:21:49.255 "name": "BaseBdev1", 00:21:49.255 "aliases": [ 00:21:49.255 "e92eb728-f9b4-4442-92e0-4827346336d4" 00:21:49.255 ], 00:21:49.255 "product_name": "Malloc disk", 00:21:49.255 "block_size": 512, 00:21:49.255 "num_blocks": 65536, 00:21:49.255 "uuid": "e92eb728-f9b4-4442-92e0-4827346336d4", 00:21:49.255 "assigned_rate_limits": { 00:21:49.255 "rw_ios_per_sec": 0, 00:21:49.255 "rw_mbytes_per_sec": 0, 00:21:49.255 "r_mbytes_per_sec": 0, 00:21:49.255 "w_mbytes_per_sec": 0 00:21:49.255 }, 00:21:49.255 "claimed": true, 00:21:49.255 "claim_type": "exclusive_write", 00:21:49.255 "zoned": false, 00:21:49.255 "supported_io_types": { 00:21:49.255 "read": true, 00:21:49.255 "write": true, 00:21:49.255 "unmap": true, 00:21:49.255 "flush": true, 00:21:49.255 "reset": true, 00:21:49.255 "nvme_admin": false, 00:21:49.255 "nvme_io": false, 00:21:49.255 "nvme_io_md": false, 00:21:49.255 "write_zeroes": true, 00:21:49.255 "zcopy": true, 00:21:49.255 "get_zone_info": false, 00:21:49.255 "zone_management": false, 00:21:49.255 "zone_append": false, 00:21:49.255 "compare": false, 00:21:49.255 "compare_and_write": false, 00:21:49.255 "abort": true, 00:21:49.255 "seek_hole": false, 00:21:49.255 "seek_data": false, 00:21:49.255 "copy": true, 00:21:49.255 "nvme_iov_md": false 00:21:49.255 }, 00:21:49.255 "memory_domains": [ 00:21:49.255 { 00:21:49.255 "dma_device_id": "system", 00:21:49.255 "dma_device_type": 1 00:21:49.255 }, 00:21:49.255 { 00:21:49.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.255 "dma_device_type": 2 00:21:49.255 } 00:21:49.255 ], 00:21:49.255 "driver_specific": {} 00:21:49.255 } 00:21:49.255 ] 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.255 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.513 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.513 "name": "Existed_Raid", 00:21:49.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.513 "strip_size_kb": 0, 00:21:49.513 "state": "configuring", 00:21:49.513 "raid_level": "raid1", 00:21:49.513 "superblock": false, 00:21:49.513 "num_base_bdevs": 3, 00:21:49.513 "num_base_bdevs_discovered": 1, 00:21:49.513 "num_base_bdevs_operational": 3, 00:21:49.513 "base_bdevs_list": [ 00:21:49.513 { 00:21:49.513 "name": "BaseBdev1", 00:21:49.513 "uuid": "e92eb728-f9b4-4442-92e0-4827346336d4", 00:21:49.513 "is_configured": true, 00:21:49.513 "data_offset": 0, 00:21:49.513 "data_size": 65536 00:21:49.513 }, 00:21:49.513 { 00:21:49.513 "name": "BaseBdev2", 00:21:49.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.513 "is_configured": false, 00:21:49.513 "data_offset": 0, 00:21:49.513 "data_size": 0 00:21:49.513 }, 00:21:49.513 { 00:21:49.513 "name": "BaseBdev3", 00:21:49.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.513 "is_configured": false, 00:21:49.513 "data_offset": 0, 00:21:49.513 "data_size": 0 00:21:49.513 } 00:21:49.513 ] 00:21:49.513 }' 00:21:49.513 14:04:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.513 14:04:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.445 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:50.445 [2024-07-25 14:04:39.479042] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:50.445 [2024-07-25 14:04:39.479138] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:50.703 [2024-07-25 14:04:39.719083] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.703 [2024-07-25 14:04:39.721271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:50.703 [2024-07-25 14:04:39.721363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:50.703 [2024-07-25 14:04:39.721378] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:50.703 [2024-07-25 14:04:39.721434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.703 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.962 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.962 "name": "Existed_Raid", 00:21:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.962 "strip_size_kb": 0, 00:21:50.962 "state": "configuring", 00:21:50.962 "raid_level": "raid1", 00:21:50.962 "superblock": false, 00:21:50.962 "num_base_bdevs": 3, 00:21:50.962 "num_base_bdevs_discovered": 1, 00:21:50.962 "num_base_bdevs_operational": 3, 00:21:50.962 "base_bdevs_list": [ 00:21:50.962 { 00:21:50.962 "name": "BaseBdev1", 00:21:50.962 "uuid": "e92eb728-f9b4-4442-92e0-4827346336d4", 00:21:50.962 "is_configured": true, 00:21:50.962 "data_offset": 0, 00:21:50.962 "data_size": 65536 00:21:50.962 }, 00:21:50.962 { 00:21:50.962 "name": "BaseBdev2", 00:21:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.962 "is_configured": false, 00:21:50.962 "data_offset": 0, 00:21:50.962 "data_size": 0 00:21:50.962 }, 00:21:50.962 { 00:21:50.962 "name": "BaseBdev3", 00:21:50.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.962 "is_configured": false, 00:21:50.962 "data_offset": 0, 00:21:50.962 "data_size": 0 00:21:50.962 } 00:21:50.962 ] 00:21:50.962 }' 00:21:50.962 14:04:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.962 14:04:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.896 14:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:51.896 [2024-07-25 14:04:40.917152] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.896 BaseBdev2 00:21:51.896 14:04:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:51.896 14:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:52.154 14:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:52.154 14:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:52.154 14:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:52.154 14:04:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:52.154 14:04:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.154 14:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:52.411 [ 00:21:52.411 { 00:21:52.411 "name": "BaseBdev2", 00:21:52.411 "aliases": [ 00:21:52.411 "f942fd16-109d-43e6-904c-308a0d334ede" 00:21:52.411 ], 00:21:52.411 "product_name": "Malloc disk", 00:21:52.411 "block_size": 512, 00:21:52.411 "num_blocks": 65536, 00:21:52.411 "uuid": "f942fd16-109d-43e6-904c-308a0d334ede", 00:21:52.411 "assigned_rate_limits": { 00:21:52.411 "rw_ios_per_sec": 0, 00:21:52.411 "rw_mbytes_per_sec": 0, 00:21:52.411 "r_mbytes_per_sec": 0, 00:21:52.411 "w_mbytes_per_sec": 0 00:21:52.411 }, 00:21:52.411 "claimed": true, 00:21:52.411 "claim_type": "exclusive_write", 00:21:52.411 "zoned": false, 00:21:52.411 "supported_io_types": { 00:21:52.411 "read": true, 00:21:52.411 "write": true, 00:21:52.411 "unmap": true, 00:21:52.411 "flush": true, 00:21:52.411 "reset": true, 00:21:52.411 "nvme_admin": false, 00:21:52.411 "nvme_io": false, 00:21:52.411 "nvme_io_md": false, 00:21:52.411 "write_zeroes": true, 00:21:52.411 "zcopy": true, 00:21:52.411 "get_zone_info": false, 00:21:52.411 "zone_management": false, 00:21:52.411 "zone_append": false, 00:21:52.411 "compare": false, 00:21:52.411 "compare_and_write": false, 00:21:52.411 "abort": true, 00:21:52.411 "seek_hole": false, 00:21:52.411 "seek_data": false, 00:21:52.411 "copy": true, 00:21:52.411 "nvme_iov_md": false 00:21:52.411 }, 00:21:52.411 "memory_domains": [ 00:21:52.411 { 00:21:52.411 "dma_device_id": "system", 00:21:52.411 "dma_device_type": 1 00:21:52.411 }, 00:21:52.411 { 00:21:52.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.411 "dma_device_type": 2 00:21:52.411 } 00:21:52.411 ], 00:21:52.411 "driver_specific": {} 00:21:52.411 } 00:21:52.411 ] 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.411 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.669 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:52.669 "name": "Existed_Raid", 00:21:52.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.669 "strip_size_kb": 0, 00:21:52.669 "state": "configuring", 00:21:52.669 "raid_level": "raid1", 00:21:52.669 "superblock": false, 00:21:52.669 "num_base_bdevs": 3, 00:21:52.669 "num_base_bdevs_discovered": 2, 00:21:52.669 "num_base_bdevs_operational": 3, 00:21:52.669 "base_bdevs_list": [ 00:21:52.669 { 00:21:52.669 "name": "BaseBdev1", 00:21:52.669 "uuid": "e92eb728-f9b4-4442-92e0-4827346336d4", 00:21:52.669 "is_configured": true, 00:21:52.669 "data_offset": 0, 00:21:52.669 "data_size": 65536 00:21:52.669 }, 00:21:52.669 { 00:21:52.669 "name": "BaseBdev2", 00:21:52.669 "uuid": "f942fd16-109d-43e6-904c-308a0d334ede", 00:21:52.669 "is_configured": true, 00:21:52.669 "data_offset": 0, 00:21:52.669 "data_size": 65536 00:21:52.669 }, 00:21:52.669 { 00:21:52.669 "name": "BaseBdev3", 00:21:52.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.669 "is_configured": false, 00:21:52.669 "data_offset": 0, 00:21:52.669 "data_size": 0 00:21:52.669 } 00:21:52.669 ] 00:21:52.669 }' 00:21:52.669 14:04:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.669 14:04:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.601 14:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:53.602 [2024-07-25 14:04:42.551366] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:53.602 [2024-07-25 14:04:42.551453] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:21:53.602 [2024-07-25 14:04:42.551466] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:53.602 [2024-07-25 14:04:42.551592] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:53.602 [2024-07-25 14:04:42.552005] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:21:53.602 [2024-07-25 14:04:42.552030] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:21:53.602 [2024-07-25 14:04:42.552285] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.602 BaseBdev3 00:21:53.602 14:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:53.602 14:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:53.602 14:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:53.602 14:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:53.602 14:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:53.602 14:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:53.602 14:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.878 14:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:54.136 [ 00:21:54.136 { 00:21:54.136 "name": "BaseBdev3", 00:21:54.136 "aliases": [ 00:21:54.136 "eaa04faf-865e-408c-9b11-ef993ab3ff28" 00:21:54.136 ], 00:21:54.136 "product_name": "Malloc disk", 00:21:54.136 "block_size": 512, 00:21:54.136 "num_blocks": 65536, 00:21:54.136 "uuid": "eaa04faf-865e-408c-9b11-ef993ab3ff28", 00:21:54.136 "assigned_rate_limits": { 00:21:54.136 "rw_ios_per_sec": 0, 00:21:54.136 "rw_mbytes_per_sec": 0, 00:21:54.136 "r_mbytes_per_sec": 0, 00:21:54.136 "w_mbytes_per_sec": 0 00:21:54.136 }, 00:21:54.136 "claimed": true, 00:21:54.136 "claim_type": "exclusive_write", 00:21:54.136 "zoned": false, 00:21:54.136 "supported_io_types": { 00:21:54.136 "read": true, 00:21:54.136 "write": true, 00:21:54.136 "unmap": true, 00:21:54.136 "flush": true, 00:21:54.136 "reset": true, 00:21:54.136 "nvme_admin": false, 00:21:54.136 "nvme_io": false, 00:21:54.136 "nvme_io_md": false, 00:21:54.136 "write_zeroes": true, 00:21:54.136 "zcopy": true, 00:21:54.136 "get_zone_info": false, 00:21:54.136 "zone_management": false, 00:21:54.137 "zone_append": false, 00:21:54.137 "compare": false, 00:21:54.137 "compare_and_write": false, 00:21:54.137 "abort": true, 00:21:54.137 "seek_hole": false, 00:21:54.137 "seek_data": false, 00:21:54.137 "copy": true, 00:21:54.137 "nvme_iov_md": false 00:21:54.137 }, 00:21:54.137 "memory_domains": [ 00:21:54.137 { 00:21:54.137 "dma_device_id": "system", 00:21:54.137 "dma_device_type": 1 00:21:54.137 }, 00:21:54.137 { 00:21:54.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.137 "dma_device_type": 2 00:21:54.137 } 00:21:54.137 ], 00:21:54.137 "driver_specific": {} 00:21:54.137 } 00:21:54.137 ] 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.137 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.396 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.396 "name": "Existed_Raid", 00:21:54.396 "uuid": "62b996b9-cbc4-46fe-ac0a-342551629a7e", 00:21:54.396 "strip_size_kb": 0, 00:21:54.396 "state": "online", 00:21:54.396 "raid_level": "raid1", 00:21:54.396 "superblock": false, 00:21:54.396 "num_base_bdevs": 3, 00:21:54.396 "num_base_bdevs_discovered": 3, 00:21:54.396 "num_base_bdevs_operational": 3, 00:21:54.396 "base_bdevs_list": [ 00:21:54.396 { 00:21:54.396 "name": "BaseBdev1", 00:21:54.396 "uuid": "e92eb728-f9b4-4442-92e0-4827346336d4", 00:21:54.396 "is_configured": true, 00:21:54.396 "data_offset": 0, 00:21:54.396 "data_size": 65536 00:21:54.396 }, 00:21:54.396 { 00:21:54.396 "name": "BaseBdev2", 00:21:54.396 "uuid": "f942fd16-109d-43e6-904c-308a0d334ede", 00:21:54.396 "is_configured": true, 00:21:54.396 "data_offset": 0, 00:21:54.396 "data_size": 65536 00:21:54.396 }, 00:21:54.396 { 00:21:54.396 "name": "BaseBdev3", 00:21:54.396 "uuid": "eaa04faf-865e-408c-9b11-ef993ab3ff28", 00:21:54.396 "is_configured": true, 00:21:54.396 "data_offset": 0, 00:21:54.396 "data_size": 65536 00:21:54.396 } 00:21:54.396 ] 00:21:54.396 }' 00:21:54.396 14:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.396 14:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:55.332 [2024-07-25 14:04:44.294874] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.332 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:55.332 "name": "Existed_Raid", 00:21:55.332 "aliases": [ 00:21:55.332 "62b996b9-cbc4-46fe-ac0a-342551629a7e" 00:21:55.332 ], 00:21:55.332 "product_name": "Raid Volume", 00:21:55.332 "block_size": 512, 00:21:55.332 "num_blocks": 65536, 00:21:55.332 "uuid": "62b996b9-cbc4-46fe-ac0a-342551629a7e", 00:21:55.332 "assigned_rate_limits": { 00:21:55.333 "rw_ios_per_sec": 0, 00:21:55.333 "rw_mbytes_per_sec": 0, 00:21:55.333 "r_mbytes_per_sec": 0, 00:21:55.333 "w_mbytes_per_sec": 0 00:21:55.333 }, 00:21:55.333 "claimed": false, 00:21:55.333 "zoned": false, 00:21:55.333 "supported_io_types": { 00:21:55.333 "read": true, 00:21:55.333 "write": true, 00:21:55.333 "unmap": false, 00:21:55.333 "flush": false, 00:21:55.333 "reset": true, 00:21:55.333 "nvme_admin": false, 00:21:55.333 
"nvme_io": false, 00:21:55.333 "nvme_io_md": false, 00:21:55.333 "write_zeroes": true, 00:21:55.333 "zcopy": false, 00:21:55.333 "get_zone_info": false, 00:21:55.333 "zone_management": false, 00:21:55.333 "zone_append": false, 00:21:55.333 "compare": false, 00:21:55.333 "compare_and_write": false, 00:21:55.333 "abort": false, 00:21:55.333 "seek_hole": false, 00:21:55.333 "seek_data": false, 00:21:55.333 "copy": false, 00:21:55.333 "nvme_iov_md": false 00:21:55.333 }, 00:21:55.333 "memory_domains": [ 00:21:55.333 { 00:21:55.333 "dma_device_id": "system", 00:21:55.333 "dma_device_type": 1 00:21:55.333 }, 00:21:55.333 { 00:21:55.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.333 "dma_device_type": 2 00:21:55.333 }, 00:21:55.333 { 00:21:55.333 "dma_device_id": "system", 00:21:55.333 "dma_device_type": 1 00:21:55.333 }, 00:21:55.333 { 00:21:55.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.333 "dma_device_type": 2 00:21:55.333 }, 00:21:55.333 { 00:21:55.333 "dma_device_id": "system", 00:21:55.333 "dma_device_type": 1 00:21:55.333 }, 00:21:55.333 { 00:21:55.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.333 "dma_device_type": 2 00:21:55.333 } 00:21:55.333 ], 00:21:55.333 "driver_specific": { 00:21:55.333 "raid": { 00:21:55.333 "uuid": "62b996b9-cbc4-46fe-ac0a-342551629a7e", 00:21:55.333 "strip_size_kb": 0, 00:21:55.333 "state": "online", 00:21:55.333 "raid_level": "raid1", 00:21:55.333 "superblock": false, 00:21:55.333 "num_base_bdevs": 3, 00:21:55.333 "num_base_bdevs_discovered": 3, 00:21:55.333 "num_base_bdevs_operational": 3, 00:21:55.333 "base_bdevs_list": [ 00:21:55.333 { 00:21:55.333 "name": "BaseBdev1", 00:21:55.333 "uuid": "e92eb728-f9b4-4442-92e0-4827346336d4", 00:21:55.333 "is_configured": true, 00:21:55.333 "data_offset": 0, 00:21:55.333 "data_size": 65536 00:21:55.333 }, 00:21:55.333 { 00:21:55.333 "name": "BaseBdev2", 00:21:55.333 "uuid": "f942fd16-109d-43e6-904c-308a0d334ede", 00:21:55.333 "is_configured": true, 00:21:55.333 "data_offset": 0, 00:21:55.333 "data_size": 65536 00:21:55.333 }, 00:21:55.333 { 00:21:55.333 "name": "BaseBdev3", 00:21:55.333 "uuid": "eaa04faf-865e-408c-9b11-ef993ab3ff28", 00:21:55.333 "is_configured": true, 00:21:55.333 "data_offset": 0, 00:21:55.333 "data_size": 65536 00:21:55.333 } 00:21:55.333 ] 00:21:55.333 } 00:21:55.333 } 00:21:55.333 }' 00:21:55.333 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:55.333 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:55.333 BaseBdev2 00:21:55.333 BaseBdev3' 00:21:55.333 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:55.333 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:55.333 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:55.591 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:55.591 "name": "BaseBdev1", 00:21:55.591 "aliases": [ 00:21:55.591 "e92eb728-f9b4-4442-92e0-4827346336d4" 00:21:55.591 ], 00:21:55.591 "product_name": "Malloc disk", 00:21:55.591 "block_size": 512, 00:21:55.591 "num_blocks": 65536, 00:21:55.591 "uuid": "e92eb728-f9b4-4442-92e0-4827346336d4", 00:21:55.591 "assigned_rate_limits": { 00:21:55.591 "rw_ios_per_sec": 0, 
00:21:55.591 "rw_mbytes_per_sec": 0, 00:21:55.591 "r_mbytes_per_sec": 0, 00:21:55.591 "w_mbytes_per_sec": 0 00:21:55.591 }, 00:21:55.591 "claimed": true, 00:21:55.591 "claim_type": "exclusive_write", 00:21:55.591 "zoned": false, 00:21:55.591 "supported_io_types": { 00:21:55.591 "read": true, 00:21:55.591 "write": true, 00:21:55.591 "unmap": true, 00:21:55.591 "flush": true, 00:21:55.591 "reset": true, 00:21:55.591 "nvme_admin": false, 00:21:55.591 "nvme_io": false, 00:21:55.591 "nvme_io_md": false, 00:21:55.591 "write_zeroes": true, 00:21:55.591 "zcopy": true, 00:21:55.591 "get_zone_info": false, 00:21:55.591 "zone_management": false, 00:21:55.591 "zone_append": false, 00:21:55.591 "compare": false, 00:21:55.591 "compare_and_write": false, 00:21:55.591 "abort": true, 00:21:55.591 "seek_hole": false, 00:21:55.591 "seek_data": false, 00:21:55.591 "copy": true, 00:21:55.591 "nvme_iov_md": false 00:21:55.591 }, 00:21:55.591 "memory_domains": [ 00:21:55.591 { 00:21:55.591 "dma_device_id": "system", 00:21:55.591 "dma_device_type": 1 00:21:55.591 }, 00:21:55.591 { 00:21:55.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.591 "dma_device_type": 2 00:21:55.591 } 00:21:55.591 ], 00:21:55.591 "driver_specific": {} 00:21:55.591 }' 00:21:55.591 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.849 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:55.849 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:55.849 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.849 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:55.849 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:55.849 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:55.849 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.107 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:56.108 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.108 14:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.108 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:56.108 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:56.108 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:56.108 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:56.366 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:56.366 "name": "BaseBdev2", 00:21:56.366 "aliases": [ 00:21:56.366 "f942fd16-109d-43e6-904c-308a0d334ede" 00:21:56.366 ], 00:21:56.366 "product_name": "Malloc disk", 00:21:56.366 "block_size": 512, 00:21:56.366 "num_blocks": 65536, 00:21:56.366 "uuid": "f942fd16-109d-43e6-904c-308a0d334ede", 00:21:56.366 "assigned_rate_limits": { 00:21:56.366 "rw_ios_per_sec": 0, 00:21:56.366 "rw_mbytes_per_sec": 0, 00:21:56.366 "r_mbytes_per_sec": 0, 00:21:56.366 "w_mbytes_per_sec": 0 00:21:56.366 }, 00:21:56.366 "claimed": true, 00:21:56.366 "claim_type": "exclusive_write", 
00:21:56.366 "zoned": false, 00:21:56.366 "supported_io_types": { 00:21:56.366 "read": true, 00:21:56.366 "write": true, 00:21:56.366 "unmap": true, 00:21:56.366 "flush": true, 00:21:56.366 "reset": true, 00:21:56.366 "nvme_admin": false, 00:21:56.366 "nvme_io": false, 00:21:56.366 "nvme_io_md": false, 00:21:56.366 "write_zeroes": true, 00:21:56.366 "zcopy": true, 00:21:56.366 "get_zone_info": false, 00:21:56.366 "zone_management": false, 00:21:56.366 "zone_append": false, 00:21:56.366 "compare": false, 00:21:56.366 "compare_and_write": false, 00:21:56.366 "abort": true, 00:21:56.366 "seek_hole": false, 00:21:56.366 "seek_data": false, 00:21:56.366 "copy": true, 00:21:56.366 "nvme_iov_md": false 00:21:56.366 }, 00:21:56.366 "memory_domains": [ 00:21:56.366 { 00:21:56.366 "dma_device_id": "system", 00:21:56.366 "dma_device_type": 1 00:21:56.366 }, 00:21:56.366 { 00:21:56.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.366 "dma_device_type": 2 00:21:56.366 } 00:21:56.366 ], 00:21:56.366 "driver_specific": {} 00:21:56.366 }' 00:21:56.366 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.366 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.366 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:56.366 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.623 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.623 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:56.623 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.623 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.623 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:56.623 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.623 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.973 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:56.973 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:56.973 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:56.973 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:56.973 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:56.973 "name": "BaseBdev3", 00:21:56.973 "aliases": [ 00:21:56.973 "eaa04faf-865e-408c-9b11-ef993ab3ff28" 00:21:56.973 ], 00:21:56.973 "product_name": "Malloc disk", 00:21:56.973 "block_size": 512, 00:21:56.973 "num_blocks": 65536, 00:21:56.973 "uuid": "eaa04faf-865e-408c-9b11-ef993ab3ff28", 00:21:56.973 "assigned_rate_limits": { 00:21:56.973 "rw_ios_per_sec": 0, 00:21:56.973 "rw_mbytes_per_sec": 0, 00:21:56.973 "r_mbytes_per_sec": 0, 00:21:56.973 "w_mbytes_per_sec": 0 00:21:56.973 }, 00:21:56.973 "claimed": true, 00:21:56.973 "claim_type": "exclusive_write", 00:21:56.973 "zoned": false, 00:21:56.973 "supported_io_types": { 00:21:56.973 "read": true, 00:21:56.973 "write": true, 00:21:56.973 "unmap": true, 00:21:56.973 "flush": true, 00:21:56.973 "reset": 
true, 00:21:56.973 "nvme_admin": false, 00:21:56.973 "nvme_io": false, 00:21:56.973 "nvme_io_md": false, 00:21:56.973 "write_zeroes": true, 00:21:56.973 "zcopy": true, 00:21:56.973 "get_zone_info": false, 00:21:56.973 "zone_management": false, 00:21:56.973 "zone_append": false, 00:21:56.973 "compare": false, 00:21:56.973 "compare_and_write": false, 00:21:56.973 "abort": true, 00:21:56.973 "seek_hole": false, 00:21:56.973 "seek_data": false, 00:21:56.973 "copy": true, 00:21:56.973 "nvme_iov_md": false 00:21:56.973 }, 00:21:56.973 "memory_domains": [ 00:21:56.973 { 00:21:56.973 "dma_device_id": "system", 00:21:56.973 "dma_device_type": 1 00:21:56.973 }, 00:21:56.973 { 00:21:56.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.973 "dma_device_type": 2 00:21:56.973 } 00:21:56.973 ], 00:21:56.973 "driver_specific": {} 00:21:56.973 }' 00:21:56.973 14:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.236 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.236 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:57.236 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.236 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.236 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:57.236 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.236 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.494 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:57.494 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.494 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.494 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:57.494 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:57.752 [2024-07-25 14:04:46.659185] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.752 14:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.010 14:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.010 "name": "Existed_Raid", 00:21:58.010 "uuid": "62b996b9-cbc4-46fe-ac0a-342551629a7e", 00:21:58.010 "strip_size_kb": 0, 00:21:58.010 "state": "online", 00:21:58.010 "raid_level": "raid1", 00:21:58.010 "superblock": false, 00:21:58.010 "num_base_bdevs": 3, 00:21:58.010 "num_base_bdevs_discovered": 2, 00:21:58.010 "num_base_bdevs_operational": 2, 00:21:58.010 "base_bdevs_list": [ 00:21:58.010 { 00:21:58.010 "name": null, 00:21:58.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.010 "is_configured": false, 00:21:58.010 "data_offset": 0, 00:21:58.010 "data_size": 65536 00:21:58.010 }, 00:21:58.010 { 00:21:58.010 "name": "BaseBdev2", 00:21:58.010 "uuid": "f942fd16-109d-43e6-904c-308a0d334ede", 00:21:58.010 "is_configured": true, 00:21:58.010 "data_offset": 0, 00:21:58.010 "data_size": 65536 00:21:58.010 }, 00:21:58.010 { 00:21:58.010 "name": "BaseBdev3", 00:21:58.010 "uuid": "eaa04faf-865e-408c-9b11-ef993ab3ff28", 00:21:58.010 "is_configured": true, 00:21:58.010 "data_offset": 0, 00:21:58.010 "data_size": 65536 00:21:58.010 } 00:21:58.010 ] 00:21:58.010 }' 00:21:58.010 14:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.010 14:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.944 14:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:58.944 14:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:58.944 14:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.944 14:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:59.202 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:59.202 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.202 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:59.460 [2024-07-25 14:04:48.269091] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:59.460 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:59.460 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:59.460 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:21:59.460 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.718 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:59.718 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.718 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:00.052 [2024-07-25 14:04:48.889620] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:00.052 [2024-07-25 14:04:48.889778] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:00.052 [2024-07-25 14:04:48.972869] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.052 [2024-07-25 14:04:48.972951] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.052 [2024-07-25 14:04:48.972967] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:22:00.052 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:00.052 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:00.052 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.052 14:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:00.310 14:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:00.310 14:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:00.310 14:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:00.310 14:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:00.310 14:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:00.310 14:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:00.568 BaseBdev2 00:22:00.568 14:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:00.568 14:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:00.568 14:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:00.568 14:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:00.568 14:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:00.568 14:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:00.568 14:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.827 14:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:01.084 [ 00:22:01.084 { 00:22:01.084 "name": "BaseBdev2", 00:22:01.084 "aliases": [ 00:22:01.084 "3ec3784c-67b5-45d3-8c7e-fa7313bc566d" 00:22:01.084 ], 00:22:01.084 "product_name": "Malloc disk", 00:22:01.084 "block_size": 512, 00:22:01.084 "num_blocks": 65536, 00:22:01.084 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:01.084 "assigned_rate_limits": { 00:22:01.084 "rw_ios_per_sec": 0, 00:22:01.084 "rw_mbytes_per_sec": 0, 00:22:01.084 "r_mbytes_per_sec": 0, 00:22:01.084 "w_mbytes_per_sec": 0 00:22:01.084 }, 00:22:01.084 "claimed": false, 00:22:01.084 "zoned": false, 00:22:01.084 "supported_io_types": { 00:22:01.084 "read": true, 00:22:01.084 "write": true, 00:22:01.084 "unmap": true, 00:22:01.084 "flush": true, 00:22:01.084 "reset": true, 00:22:01.084 "nvme_admin": false, 00:22:01.084 "nvme_io": false, 00:22:01.084 "nvme_io_md": false, 00:22:01.084 "write_zeroes": true, 00:22:01.084 "zcopy": true, 00:22:01.084 "get_zone_info": false, 00:22:01.084 "zone_management": false, 00:22:01.084 "zone_append": false, 00:22:01.085 "compare": false, 00:22:01.085 "compare_and_write": false, 00:22:01.085 "abort": true, 00:22:01.085 "seek_hole": false, 00:22:01.085 "seek_data": false, 00:22:01.085 "copy": true, 00:22:01.085 "nvme_iov_md": false 00:22:01.085 }, 00:22:01.085 "memory_domains": [ 00:22:01.085 { 00:22:01.085 "dma_device_id": "system", 00:22:01.085 "dma_device_type": 1 00:22:01.085 }, 00:22:01.085 { 00:22:01.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.085 "dma_device_type": 2 00:22:01.085 } 00:22:01.085 ], 00:22:01.085 "driver_specific": {} 00:22:01.085 } 00:22:01.085 ] 00:22:01.085 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:01.085 14:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:01.085 14:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:01.085 14:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:01.343 BaseBdev3 00:22:01.343 14:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:01.343 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:01.343 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:01.343 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:01.343 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:01.343 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:01.343 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.601 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:01.859 [ 00:22:01.859 { 00:22:01.859 "name": "BaseBdev3", 00:22:01.859 "aliases": [ 00:22:01.859 "cd0d795b-d1db-4287-9b27-e1002c0b0913" 00:22:01.859 ], 00:22:01.859 "product_name": "Malloc disk", 00:22:01.859 "block_size": 512, 00:22:01.859 "num_blocks": 65536, 00:22:01.859 "uuid": 
"cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:01.859 "assigned_rate_limits": { 00:22:01.859 "rw_ios_per_sec": 0, 00:22:01.859 "rw_mbytes_per_sec": 0, 00:22:01.859 "r_mbytes_per_sec": 0, 00:22:01.859 "w_mbytes_per_sec": 0 00:22:01.859 }, 00:22:01.859 "claimed": false, 00:22:01.859 "zoned": false, 00:22:01.859 "supported_io_types": { 00:22:01.859 "read": true, 00:22:01.859 "write": true, 00:22:01.859 "unmap": true, 00:22:01.859 "flush": true, 00:22:01.859 "reset": true, 00:22:01.859 "nvme_admin": false, 00:22:01.859 "nvme_io": false, 00:22:01.859 "nvme_io_md": false, 00:22:01.859 "write_zeroes": true, 00:22:01.859 "zcopy": true, 00:22:01.859 "get_zone_info": false, 00:22:01.859 "zone_management": false, 00:22:01.859 "zone_append": false, 00:22:01.859 "compare": false, 00:22:01.859 "compare_and_write": false, 00:22:01.859 "abort": true, 00:22:01.859 "seek_hole": false, 00:22:01.859 "seek_data": false, 00:22:01.859 "copy": true, 00:22:01.859 "nvme_iov_md": false 00:22:01.859 }, 00:22:01.859 "memory_domains": [ 00:22:01.859 { 00:22:01.859 "dma_device_id": "system", 00:22:01.859 "dma_device_type": 1 00:22:01.859 }, 00:22:01.859 { 00:22:01.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.859 "dma_device_type": 2 00:22:01.859 } 00:22:01.860 ], 00:22:01.860 "driver_specific": {} 00:22:01.860 } 00:22:01.860 ] 00:22:02.117 14:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:02.117 14:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:02.117 14:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:02.117 14:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:02.376 [2024-07-25 14:04:51.171475] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.376 [2024-07-25 14:04:51.171603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.376 [2024-07-25 14:04:51.171654] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.376 [2024-07-25 14:04:51.174686] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.376 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.634 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.634 "name": "Existed_Raid", 00:22:02.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.634 "strip_size_kb": 0, 00:22:02.634 "state": "configuring", 00:22:02.634 "raid_level": "raid1", 00:22:02.634 "superblock": false, 00:22:02.634 "num_base_bdevs": 3, 00:22:02.634 "num_base_bdevs_discovered": 2, 00:22:02.634 "num_base_bdevs_operational": 3, 00:22:02.634 "base_bdevs_list": [ 00:22:02.634 { 00:22:02.634 "name": "BaseBdev1", 00:22:02.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.634 "is_configured": false, 00:22:02.634 "data_offset": 0, 00:22:02.634 "data_size": 0 00:22:02.634 }, 00:22:02.634 { 00:22:02.634 "name": "BaseBdev2", 00:22:02.634 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:02.634 "is_configured": true, 00:22:02.634 "data_offset": 0, 00:22:02.634 "data_size": 65536 00:22:02.634 }, 00:22:02.634 { 00:22:02.634 "name": "BaseBdev3", 00:22:02.634 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:02.634 "is_configured": true, 00:22:02.634 "data_offset": 0, 00:22:02.634 "data_size": 65536 00:22:02.634 } 00:22:02.634 ] 00:22:02.634 }' 00:22:02.634 14:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.634 14:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.251 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:03.509 [2024-07-25 14:04:52.363642] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.509 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.767 14:04:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.767 "name": "Existed_Raid", 00:22:03.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.767 "strip_size_kb": 0, 00:22:03.767 "state": "configuring", 00:22:03.767 "raid_level": "raid1", 00:22:03.767 "superblock": false, 00:22:03.767 "num_base_bdevs": 3, 00:22:03.767 "num_base_bdevs_discovered": 1, 00:22:03.767 "num_base_bdevs_operational": 3, 00:22:03.767 "base_bdevs_list": [ 00:22:03.767 { 00:22:03.767 "name": "BaseBdev1", 00:22:03.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.767 "is_configured": false, 00:22:03.767 "data_offset": 0, 00:22:03.767 "data_size": 0 00:22:03.767 }, 00:22:03.767 { 00:22:03.767 "name": null, 00:22:03.767 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:03.767 "is_configured": false, 00:22:03.767 "data_offset": 0, 00:22:03.767 "data_size": 65536 00:22:03.767 }, 00:22:03.767 { 00:22:03.767 "name": "BaseBdev3", 00:22:03.767 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:03.767 "is_configured": true, 00:22:03.767 "data_offset": 0, 00:22:03.767 "data_size": 65536 00:22:03.767 } 00:22:03.767 ] 00:22:03.767 }' 00:22:03.767 14:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.767 14:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.332 14:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.332 14:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:04.896 14:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:04.896 14:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:05.153 [2024-07-25 14:04:53.955209] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:05.153 BaseBdev1 00:22:05.153 14:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:05.153 14:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:05.153 14:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:05.153 14:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:05.153 14:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:05.153 14:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:05.153 14:04:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:05.411 14:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:05.669 [ 00:22:05.669 { 00:22:05.669 "name": "BaseBdev1", 00:22:05.669 "aliases": [ 00:22:05.669 "61e4609a-cdad-4d6b-a324-52abb207f017" 00:22:05.669 ], 00:22:05.669 "product_name": "Malloc disk", 00:22:05.669 "block_size": 512, 00:22:05.669 "num_blocks": 65536, 00:22:05.669 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:05.669 "assigned_rate_limits": { 00:22:05.669 
"rw_ios_per_sec": 0, 00:22:05.669 "rw_mbytes_per_sec": 0, 00:22:05.669 "r_mbytes_per_sec": 0, 00:22:05.669 "w_mbytes_per_sec": 0 00:22:05.669 }, 00:22:05.669 "claimed": true, 00:22:05.669 "claim_type": "exclusive_write", 00:22:05.669 "zoned": false, 00:22:05.669 "supported_io_types": { 00:22:05.669 "read": true, 00:22:05.669 "write": true, 00:22:05.669 "unmap": true, 00:22:05.669 "flush": true, 00:22:05.669 "reset": true, 00:22:05.669 "nvme_admin": false, 00:22:05.669 "nvme_io": false, 00:22:05.669 "nvme_io_md": false, 00:22:05.669 "write_zeroes": true, 00:22:05.669 "zcopy": true, 00:22:05.669 "get_zone_info": false, 00:22:05.669 "zone_management": false, 00:22:05.669 "zone_append": false, 00:22:05.669 "compare": false, 00:22:05.669 "compare_and_write": false, 00:22:05.669 "abort": true, 00:22:05.669 "seek_hole": false, 00:22:05.669 "seek_data": false, 00:22:05.669 "copy": true, 00:22:05.669 "nvme_iov_md": false 00:22:05.669 }, 00:22:05.669 "memory_domains": [ 00:22:05.669 { 00:22:05.669 "dma_device_id": "system", 00:22:05.669 "dma_device_type": 1 00:22:05.669 }, 00:22:05.669 { 00:22:05.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.669 "dma_device_type": 2 00:22:05.669 } 00:22:05.669 ], 00:22:05.669 "driver_specific": {} 00:22:05.669 } 00:22:05.669 ] 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.669 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.926 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.926 "name": "Existed_Raid", 00:22:05.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.926 "strip_size_kb": 0, 00:22:05.926 "state": "configuring", 00:22:05.926 "raid_level": "raid1", 00:22:05.926 "superblock": false, 00:22:05.926 "num_base_bdevs": 3, 00:22:05.926 "num_base_bdevs_discovered": 2, 00:22:05.926 "num_base_bdevs_operational": 3, 00:22:05.926 "base_bdevs_list": [ 00:22:05.926 { 00:22:05.926 "name": "BaseBdev1", 00:22:05.926 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:05.926 "is_configured": true, 00:22:05.927 "data_offset": 0, 00:22:05.927 
"data_size": 65536 00:22:05.927 }, 00:22:05.927 { 00:22:05.927 "name": null, 00:22:05.927 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:05.927 "is_configured": false, 00:22:05.927 "data_offset": 0, 00:22:05.927 "data_size": 65536 00:22:05.927 }, 00:22:05.927 { 00:22:05.927 "name": "BaseBdev3", 00:22:05.927 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:05.927 "is_configured": true, 00:22:05.927 "data_offset": 0, 00:22:05.927 "data_size": 65536 00:22:05.927 } 00:22:05.927 ] 00:22:05.927 }' 00:22:05.927 14:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.927 14:04:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.493 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.493 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:06.749 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:06.749 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:07.006 [2024-07-25 14:04:55.911713] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.006 14:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.263 14:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.263 "name": "Existed_Raid", 00:22:07.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.263 "strip_size_kb": 0, 00:22:07.263 "state": "configuring", 00:22:07.263 "raid_level": "raid1", 00:22:07.263 "superblock": false, 00:22:07.263 "num_base_bdevs": 3, 00:22:07.263 "num_base_bdevs_discovered": 1, 00:22:07.263 "num_base_bdevs_operational": 3, 00:22:07.263 "base_bdevs_list": [ 00:22:07.263 { 00:22:07.263 "name": "BaseBdev1", 00:22:07.263 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:07.263 "is_configured": true, 
00:22:07.263 "data_offset": 0, 00:22:07.263 "data_size": 65536 00:22:07.263 }, 00:22:07.263 { 00:22:07.263 "name": null, 00:22:07.263 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:07.263 "is_configured": false, 00:22:07.263 "data_offset": 0, 00:22:07.263 "data_size": 65536 00:22:07.263 }, 00:22:07.263 { 00:22:07.263 "name": null, 00:22:07.263 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:07.263 "is_configured": false, 00:22:07.263 "data_offset": 0, 00:22:07.263 "data_size": 65536 00:22:07.263 } 00:22:07.263 ] 00:22:07.263 }' 00:22:07.263 14:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.263 14:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.196 14:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.196 14:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:08.196 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:08.196 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:08.454 [2024-07-25 14:04:57.496082] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.712 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.970 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.970 "name": "Existed_Raid", 00:22:08.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.970 "strip_size_kb": 0, 00:22:08.970 "state": "configuring", 00:22:08.970 "raid_level": "raid1", 00:22:08.970 "superblock": false, 00:22:08.970 "num_base_bdevs": 3, 00:22:08.970 "num_base_bdevs_discovered": 2, 00:22:08.970 "num_base_bdevs_operational": 3, 00:22:08.970 "base_bdevs_list": [ 00:22:08.970 { 00:22:08.970 "name": "BaseBdev1", 00:22:08.970 "uuid": 
"61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:08.970 "is_configured": true, 00:22:08.970 "data_offset": 0, 00:22:08.970 "data_size": 65536 00:22:08.970 }, 00:22:08.970 { 00:22:08.970 "name": null, 00:22:08.970 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:08.970 "is_configured": false, 00:22:08.970 "data_offset": 0, 00:22:08.970 "data_size": 65536 00:22:08.970 }, 00:22:08.971 { 00:22:08.971 "name": "BaseBdev3", 00:22:08.971 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:08.971 "is_configured": true, 00:22:08.971 "data_offset": 0, 00:22:08.971 "data_size": 65536 00:22:08.971 } 00:22:08.971 ] 00:22:08.971 }' 00:22:08.971 14:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.971 14:04:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.564 14:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:09.564 14:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.822 14:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:09.822 14:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:10.079 [2024-07-25 14:04:59.004448] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.079 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.335 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.335 "name": "Existed_Raid", 00:22:10.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.335 "strip_size_kb": 0, 00:22:10.335 "state": "configuring", 00:22:10.335 "raid_level": "raid1", 00:22:10.335 "superblock": false, 00:22:10.335 "num_base_bdevs": 3, 00:22:10.335 "num_base_bdevs_discovered": 1, 00:22:10.335 "num_base_bdevs_operational": 3, 00:22:10.335 "base_bdevs_list": [ 00:22:10.335 { 00:22:10.335 
"name": null, 00:22:10.335 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:10.335 "is_configured": false, 00:22:10.335 "data_offset": 0, 00:22:10.335 "data_size": 65536 00:22:10.335 }, 00:22:10.335 { 00:22:10.335 "name": null, 00:22:10.335 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:10.335 "is_configured": false, 00:22:10.335 "data_offset": 0, 00:22:10.335 "data_size": 65536 00:22:10.335 }, 00:22:10.335 { 00:22:10.335 "name": "BaseBdev3", 00:22:10.335 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:10.335 "is_configured": true, 00:22:10.335 "data_offset": 0, 00:22:10.335 "data_size": 65536 00:22:10.335 } 00:22:10.335 ] 00:22:10.335 }' 00:22:10.335 14:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.335 14:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.267 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:11.267 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.526 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:11.526 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:11.787 [2024-07-25 14:05:00.611468] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.787 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.044 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.044 "name": "Existed_Raid", 00:22:12.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.044 "strip_size_kb": 0, 00:22:12.044 "state": "configuring", 00:22:12.044 "raid_level": "raid1", 00:22:12.044 "superblock": false, 00:22:12.044 "num_base_bdevs": 3, 00:22:12.044 "num_base_bdevs_discovered": 2, 00:22:12.044 
"num_base_bdevs_operational": 3, 00:22:12.044 "base_bdevs_list": [ 00:22:12.044 { 00:22:12.044 "name": null, 00:22:12.044 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:12.044 "is_configured": false, 00:22:12.044 "data_offset": 0, 00:22:12.044 "data_size": 65536 00:22:12.044 }, 00:22:12.044 { 00:22:12.044 "name": "BaseBdev2", 00:22:12.044 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:12.044 "is_configured": true, 00:22:12.044 "data_offset": 0, 00:22:12.044 "data_size": 65536 00:22:12.044 }, 00:22:12.044 { 00:22:12.044 "name": "BaseBdev3", 00:22:12.044 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:12.044 "is_configured": true, 00:22:12.045 "data_offset": 0, 00:22:12.045 "data_size": 65536 00:22:12.045 } 00:22:12.045 ] 00:22:12.045 }' 00:22:12.045 14:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.045 14:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.609 14:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:12.609 14:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.867 14:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:12.867 14:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.867 14:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:13.124 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 61e4609a-cdad-4d6b-a324-52abb207f017 00:22:13.381 [2024-07-25 14:05:02.366830] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.381 [2024-07-25 14:05:02.367185] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:22:13.381 [2024-07-25 14:05:02.367235] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:13.381 [2024-07-25 14:05:02.367470] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:13.381 [2024-07-25 14:05:02.367972] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:22:13.381 [2024-07-25 14:05:02.368102] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:22:13.381 [2024-07-25 14:05:02.368471] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.381 NewBaseBdev 00:22:13.381 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:13.381 14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:13.381 14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:13.381 14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:13.381 14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:13.381 14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:13.381 
14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:13.638 14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.895 [ 00:22:13.895 { 00:22:13.895 "name": "NewBaseBdev", 00:22:13.895 "aliases": [ 00:22:13.895 "61e4609a-cdad-4d6b-a324-52abb207f017" 00:22:13.895 ], 00:22:13.895 "product_name": "Malloc disk", 00:22:13.895 "block_size": 512, 00:22:13.895 "num_blocks": 65536, 00:22:13.895 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:13.895 "assigned_rate_limits": { 00:22:13.895 "rw_ios_per_sec": 0, 00:22:13.895 "rw_mbytes_per_sec": 0, 00:22:13.895 "r_mbytes_per_sec": 0, 00:22:13.895 "w_mbytes_per_sec": 0 00:22:13.895 }, 00:22:13.895 "claimed": true, 00:22:13.895 "claim_type": "exclusive_write", 00:22:13.895 "zoned": false, 00:22:13.895 "supported_io_types": { 00:22:13.895 "read": true, 00:22:13.895 "write": true, 00:22:13.895 "unmap": true, 00:22:13.895 "flush": true, 00:22:13.895 "reset": true, 00:22:13.895 "nvme_admin": false, 00:22:13.895 "nvme_io": false, 00:22:13.895 "nvme_io_md": false, 00:22:13.895 "write_zeroes": true, 00:22:13.895 "zcopy": true, 00:22:13.895 "get_zone_info": false, 00:22:13.895 "zone_management": false, 00:22:13.895 "zone_append": false, 00:22:13.895 "compare": false, 00:22:13.895 "compare_and_write": false, 00:22:13.895 "abort": true, 00:22:13.895 "seek_hole": false, 00:22:13.895 "seek_data": false, 00:22:13.895 "copy": true, 00:22:13.895 "nvme_iov_md": false 00:22:13.895 }, 00:22:13.895 "memory_domains": [ 00:22:13.895 { 00:22:13.895 "dma_device_id": "system", 00:22:13.895 "dma_device_type": 1 00:22:13.895 }, 00:22:13.895 { 00:22:13.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.895 "dma_device_type": 2 00:22:13.895 } 00:22:13.895 ], 00:22:13.895 "driver_specific": {} 00:22:13.895 } 00:22:13.895 ] 00:22:13.895 14:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:13.895 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:13.895 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:13.895 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:13.895 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.896 14:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.153 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.153 "name": "Existed_Raid", 00:22:14.153 "uuid": "59a1dbcc-ed63-418e-8170-70faa7972daa", 00:22:14.153 "strip_size_kb": 0, 00:22:14.153 "state": "online", 00:22:14.153 "raid_level": "raid1", 00:22:14.153 "superblock": false, 00:22:14.153 "num_base_bdevs": 3, 00:22:14.153 "num_base_bdevs_discovered": 3, 00:22:14.153 "num_base_bdevs_operational": 3, 00:22:14.153 "base_bdevs_list": [ 00:22:14.153 { 00:22:14.153 "name": "NewBaseBdev", 00:22:14.153 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:14.153 "is_configured": true, 00:22:14.153 "data_offset": 0, 00:22:14.153 "data_size": 65536 00:22:14.153 }, 00:22:14.153 { 00:22:14.153 "name": "BaseBdev2", 00:22:14.153 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:14.153 "is_configured": true, 00:22:14.153 "data_offset": 0, 00:22:14.153 "data_size": 65536 00:22:14.153 }, 00:22:14.153 { 00:22:14.153 "name": "BaseBdev3", 00:22:14.153 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:14.153 "is_configured": true, 00:22:14.153 "data_offset": 0, 00:22:14.153 "data_size": 65536 00:22:14.153 } 00:22:14.153 ] 00:22:14.153 }' 00:22:14.153 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.153 14:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:15.084 14:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:15.084 [2024-07-25 14:05:04.051577] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.084 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:15.084 "name": "Existed_Raid", 00:22:15.084 "aliases": [ 00:22:15.084 "59a1dbcc-ed63-418e-8170-70faa7972daa" 00:22:15.084 ], 00:22:15.084 "product_name": "Raid Volume", 00:22:15.084 "block_size": 512, 00:22:15.084 "num_blocks": 65536, 00:22:15.084 "uuid": "59a1dbcc-ed63-418e-8170-70faa7972daa", 00:22:15.084 "assigned_rate_limits": { 00:22:15.084 "rw_ios_per_sec": 0, 00:22:15.084 "rw_mbytes_per_sec": 0, 00:22:15.084 "r_mbytes_per_sec": 0, 00:22:15.084 "w_mbytes_per_sec": 0 00:22:15.084 }, 00:22:15.084 "claimed": false, 00:22:15.084 "zoned": false, 00:22:15.084 "supported_io_types": { 00:22:15.084 "read": true, 00:22:15.084 "write": true, 00:22:15.084 "unmap": false, 00:22:15.084 "flush": false, 00:22:15.084 "reset": true, 00:22:15.084 "nvme_admin": false, 00:22:15.084 "nvme_io": false, 00:22:15.084 "nvme_io_md": false, 00:22:15.084 "write_zeroes": 
true, 00:22:15.084 "zcopy": false, 00:22:15.084 "get_zone_info": false, 00:22:15.084 "zone_management": false, 00:22:15.084 "zone_append": false, 00:22:15.084 "compare": false, 00:22:15.084 "compare_and_write": false, 00:22:15.084 "abort": false, 00:22:15.084 "seek_hole": false, 00:22:15.084 "seek_data": false, 00:22:15.084 "copy": false, 00:22:15.084 "nvme_iov_md": false 00:22:15.084 }, 00:22:15.084 "memory_domains": [ 00:22:15.084 { 00:22:15.084 "dma_device_id": "system", 00:22:15.084 "dma_device_type": 1 00:22:15.084 }, 00:22:15.084 { 00:22:15.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.084 "dma_device_type": 2 00:22:15.084 }, 00:22:15.084 { 00:22:15.084 "dma_device_id": "system", 00:22:15.084 "dma_device_type": 1 00:22:15.084 }, 00:22:15.084 { 00:22:15.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.084 "dma_device_type": 2 00:22:15.084 }, 00:22:15.084 { 00:22:15.084 "dma_device_id": "system", 00:22:15.084 "dma_device_type": 1 00:22:15.084 }, 00:22:15.084 { 00:22:15.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.084 "dma_device_type": 2 00:22:15.084 } 00:22:15.084 ], 00:22:15.084 "driver_specific": { 00:22:15.084 "raid": { 00:22:15.084 "uuid": "59a1dbcc-ed63-418e-8170-70faa7972daa", 00:22:15.084 "strip_size_kb": 0, 00:22:15.084 "state": "online", 00:22:15.084 "raid_level": "raid1", 00:22:15.084 "superblock": false, 00:22:15.084 "num_base_bdevs": 3, 00:22:15.084 "num_base_bdevs_discovered": 3, 00:22:15.084 "num_base_bdevs_operational": 3, 00:22:15.084 "base_bdevs_list": [ 00:22:15.084 { 00:22:15.084 "name": "NewBaseBdev", 00:22:15.084 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:15.084 "is_configured": true, 00:22:15.084 "data_offset": 0, 00:22:15.084 "data_size": 65536 00:22:15.084 }, 00:22:15.084 { 00:22:15.084 "name": "BaseBdev2", 00:22:15.084 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:15.084 "is_configured": true, 00:22:15.084 "data_offset": 0, 00:22:15.084 "data_size": 65536 00:22:15.084 }, 00:22:15.084 { 00:22:15.084 "name": "BaseBdev3", 00:22:15.084 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:15.084 "is_configured": true, 00:22:15.084 "data_offset": 0, 00:22:15.084 "data_size": 65536 00:22:15.084 } 00:22:15.084 ] 00:22:15.084 } 00:22:15.084 } 00:22:15.084 }' 00:22:15.084 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:15.084 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:15.084 BaseBdev2 00:22:15.084 BaseBdev3' 00:22:15.084 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.084 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:15.341 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.599 "name": "NewBaseBdev", 00:22:15.599 "aliases": [ 00:22:15.599 "61e4609a-cdad-4d6b-a324-52abb207f017" 00:22:15.599 ], 00:22:15.599 "product_name": "Malloc disk", 00:22:15.599 "block_size": 512, 00:22:15.599 "num_blocks": 65536, 00:22:15.599 "uuid": "61e4609a-cdad-4d6b-a324-52abb207f017", 00:22:15.599 "assigned_rate_limits": { 00:22:15.599 "rw_ios_per_sec": 0, 00:22:15.599 "rw_mbytes_per_sec": 0, 00:22:15.599 "r_mbytes_per_sec": 0, 
00:22:15.599 "w_mbytes_per_sec": 0 00:22:15.599 }, 00:22:15.599 "claimed": true, 00:22:15.599 "claim_type": "exclusive_write", 00:22:15.599 "zoned": false, 00:22:15.599 "supported_io_types": { 00:22:15.599 "read": true, 00:22:15.599 "write": true, 00:22:15.599 "unmap": true, 00:22:15.599 "flush": true, 00:22:15.599 "reset": true, 00:22:15.599 "nvme_admin": false, 00:22:15.599 "nvme_io": false, 00:22:15.599 "nvme_io_md": false, 00:22:15.599 "write_zeroes": true, 00:22:15.599 "zcopy": true, 00:22:15.599 "get_zone_info": false, 00:22:15.599 "zone_management": false, 00:22:15.599 "zone_append": false, 00:22:15.599 "compare": false, 00:22:15.599 "compare_and_write": false, 00:22:15.599 "abort": true, 00:22:15.599 "seek_hole": false, 00:22:15.599 "seek_data": false, 00:22:15.599 "copy": true, 00:22:15.599 "nvme_iov_md": false 00:22:15.599 }, 00:22:15.599 "memory_domains": [ 00:22:15.599 { 00:22:15.599 "dma_device_id": "system", 00:22:15.599 "dma_device_type": 1 00:22:15.599 }, 00:22:15.599 { 00:22:15.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.599 "dma_device_type": 2 00:22:15.599 } 00:22:15.599 ], 00:22:15.599 "driver_specific": {} 00:22:15.599 }' 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.599 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.856 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.856 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.856 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.856 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.856 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.856 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:15.856 14:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:16.115 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:16.115 "name": "BaseBdev2", 00:22:16.115 "aliases": [ 00:22:16.115 "3ec3784c-67b5-45d3-8c7e-fa7313bc566d" 00:22:16.115 ], 00:22:16.115 "product_name": "Malloc disk", 00:22:16.115 "block_size": 512, 00:22:16.115 "num_blocks": 65536, 00:22:16.115 "uuid": "3ec3784c-67b5-45d3-8c7e-fa7313bc566d", 00:22:16.115 "assigned_rate_limits": { 00:22:16.115 "rw_ios_per_sec": 0, 00:22:16.115 "rw_mbytes_per_sec": 0, 00:22:16.115 "r_mbytes_per_sec": 0, 00:22:16.115 "w_mbytes_per_sec": 0 00:22:16.115 }, 00:22:16.115 "claimed": true, 00:22:16.115 "claim_type": "exclusive_write", 00:22:16.115 "zoned": false, 00:22:16.115 "supported_io_types": { 
00:22:16.115 "read": true, 00:22:16.115 "write": true, 00:22:16.115 "unmap": true, 00:22:16.115 "flush": true, 00:22:16.115 "reset": true, 00:22:16.115 "nvme_admin": false, 00:22:16.115 "nvme_io": false, 00:22:16.115 "nvme_io_md": false, 00:22:16.115 "write_zeroes": true, 00:22:16.115 "zcopy": true, 00:22:16.115 "get_zone_info": false, 00:22:16.115 "zone_management": false, 00:22:16.115 "zone_append": false, 00:22:16.115 "compare": false, 00:22:16.115 "compare_and_write": false, 00:22:16.115 "abort": true, 00:22:16.115 "seek_hole": false, 00:22:16.115 "seek_data": false, 00:22:16.115 "copy": true, 00:22:16.115 "nvme_iov_md": false 00:22:16.115 }, 00:22:16.115 "memory_domains": [ 00:22:16.115 { 00:22:16.115 "dma_device_id": "system", 00:22:16.115 "dma_device_type": 1 00:22:16.115 }, 00:22:16.115 { 00:22:16.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.115 "dma_device_type": 2 00:22:16.115 } 00:22:16.115 ], 00:22:16.115 "driver_specific": {} 00:22:16.115 }' 00:22:16.115 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.115 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.115 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:16.115 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.375 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.376 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:16.376 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.376 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.376 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.376 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.376 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.633 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.633 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:16.633 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:16.633 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:16.890 "name": "BaseBdev3", 00:22:16.890 "aliases": [ 00:22:16.890 "cd0d795b-d1db-4287-9b27-e1002c0b0913" 00:22:16.890 ], 00:22:16.890 "product_name": "Malloc disk", 00:22:16.890 "block_size": 512, 00:22:16.890 "num_blocks": 65536, 00:22:16.890 "uuid": "cd0d795b-d1db-4287-9b27-e1002c0b0913", 00:22:16.890 "assigned_rate_limits": { 00:22:16.890 "rw_ios_per_sec": 0, 00:22:16.890 "rw_mbytes_per_sec": 0, 00:22:16.890 "r_mbytes_per_sec": 0, 00:22:16.890 "w_mbytes_per_sec": 0 00:22:16.890 }, 00:22:16.890 "claimed": true, 00:22:16.890 "claim_type": "exclusive_write", 00:22:16.890 "zoned": false, 00:22:16.890 "supported_io_types": { 00:22:16.890 "read": true, 00:22:16.890 "write": true, 00:22:16.890 "unmap": true, 00:22:16.890 "flush": true, 00:22:16.890 "reset": true, 00:22:16.890 "nvme_admin": false, 00:22:16.890 "nvme_io": 
false, 00:22:16.890 "nvme_io_md": false, 00:22:16.890 "write_zeroes": true, 00:22:16.890 "zcopy": true, 00:22:16.890 "get_zone_info": false, 00:22:16.890 "zone_management": false, 00:22:16.890 "zone_append": false, 00:22:16.890 "compare": false, 00:22:16.890 "compare_and_write": false, 00:22:16.890 "abort": true, 00:22:16.890 "seek_hole": false, 00:22:16.890 "seek_data": false, 00:22:16.890 "copy": true, 00:22:16.890 "nvme_iov_md": false 00:22:16.890 }, 00:22:16.890 "memory_domains": [ 00:22:16.890 { 00:22:16.890 "dma_device_id": "system", 00:22:16.890 "dma_device_type": 1 00:22:16.890 }, 00:22:16.890 { 00:22:16.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.890 "dma_device_type": 2 00:22:16.890 } 00:22:16.890 ], 00:22:16.890 "driver_specific": {} 00:22:16.890 }' 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:16.890 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.148 14:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.148 14:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:17.148 14:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.148 14:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.148 14:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:17.148 14:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:17.406 [2024-07-25 14:05:06.399734] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.406 [2024-07-25 14:05:06.400011] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.406 [2024-07-25 14:05:06.400222] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.406 [2024-07-25 14:05:06.400648] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.406 [2024-07-25 14:05:06.400781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 130918 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 130918 ']' 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 130918 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
130918 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130918' 00:22:17.406 killing process with pid 130918 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 130918 00:22:17.406 [2024-07-25 14:05:06.444145] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.406 14:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 130918 00:22:17.663 [2024-07-25 14:05:06.693370] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:19.035 00:22:19.035 real 0m33.245s 00:22:19.035 user 1m1.882s 00:22:19.035 sys 0m3.731s 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.035 ************************************ 00:22:19.035 END TEST raid_state_function_test 00:22:19.035 ************************************ 00:22:19.035 14:05:07 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:22:19.035 14:05:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:19.035 14:05:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.035 14:05:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.035 ************************************ 00:22:19.035 START TEST raid_state_function_test_sb 00:22:19.035 ************************************ 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=131930 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131930' 00:22:19.035 Process raid pid: 131930 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 131930 /var/tmp/spdk-raid.sock 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 131930 ']' 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:19.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.035 14:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.035 [2024-07-25 14:05:08.038726] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
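At this point raid_state_function_test_sb has set up its locals (raid1, 3 base bdevs, superblock_create_arg=-s), launched a dedicated bdev_svc application on a private RPC socket, and is waiting for it via waitforlisten, as the trace above shows. A minimal sketch of that startup handshake, assuming the SPDK repo root as the working directory and using rpc_get_methods as a generic readiness probe (the harness' real waitforlisten helper may poll the socket differently):

    # Start the app under test on a private RPC socket with raid debug logging enabled.
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the app answers RPCs; this is what the "Waiting for process to start up
    # and listen on UNIX domain socket" message above is waiting for.
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done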
00:22:19.035 [2024-07-25 14:05:08.039226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.293 [2024-07-25 14:05:08.213970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.558 [2024-07-25 14:05:08.478509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.829 [2024-07-25 14:05:08.698012] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.087 14:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.087 14:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:22:20.087 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:20.344 [2024-07-25 14:05:09.360338] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:20.344 [2024-07-25 14:05:09.360666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:20.344 [2024-07-25 14:05:09.360792] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:20.344 [2024-07-25 14:05:09.360945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:20.344 [2024-07-25 14:05:09.361069] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:20.344 [2024-07-25 14:05:09.361132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.344 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.910 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.910 "name": "Existed_Raid", 00:22:20.910 "uuid": 
"0f3ad6c3-6c2d-49bf-9d7d-fbadb84db8bf", 00:22:20.910 "strip_size_kb": 0, 00:22:20.910 "state": "configuring", 00:22:20.910 "raid_level": "raid1", 00:22:20.910 "superblock": true, 00:22:20.910 "num_base_bdevs": 3, 00:22:20.910 "num_base_bdevs_discovered": 0, 00:22:20.910 "num_base_bdevs_operational": 3, 00:22:20.910 "base_bdevs_list": [ 00:22:20.910 { 00:22:20.910 "name": "BaseBdev1", 00:22:20.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.910 "is_configured": false, 00:22:20.910 "data_offset": 0, 00:22:20.910 "data_size": 0 00:22:20.910 }, 00:22:20.910 { 00:22:20.910 "name": "BaseBdev2", 00:22:20.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.910 "is_configured": false, 00:22:20.910 "data_offset": 0, 00:22:20.910 "data_size": 0 00:22:20.910 }, 00:22:20.910 { 00:22:20.910 "name": "BaseBdev3", 00:22:20.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.910 "is_configured": false, 00:22:20.910 "data_offset": 0, 00:22:20.910 "data_size": 0 00:22:20.910 } 00:22:20.910 ] 00:22:20.910 }' 00:22:20.910 14:05:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.910 14:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.475 14:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:21.732 [2024-07-25 14:05:10.704400] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:21.732 [2024-07-25 14:05:10.704721] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:22:21.732 14:05:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:21.990 [2024-07-25 14:05:11.000484] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:21.990 [2024-07-25 14:05:11.000780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:21.990 [2024-07-25 14:05:11.000931] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:21.990 [2024-07-25 14:05:11.001077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:21.990 [2024-07-25 14:05:11.001183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:21.990 [2024-07-25 14:05:11.001310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:21.990 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:22.555 [2024-07-25 14:05:11.364711] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.555 BaseBdev1 00:22:22.555 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:22.555 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:22.555 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:22.555 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:22:22.555 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:22.555 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:22.555 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:22.852 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:23.149 [ 00:22:23.149 { 00:22:23.149 "name": "BaseBdev1", 00:22:23.149 "aliases": [ 00:22:23.149 "26695c36-adf2-49fb-b63c-b9c057775106" 00:22:23.149 ], 00:22:23.149 "product_name": "Malloc disk", 00:22:23.149 "block_size": 512, 00:22:23.149 "num_blocks": 65536, 00:22:23.149 "uuid": "26695c36-adf2-49fb-b63c-b9c057775106", 00:22:23.149 "assigned_rate_limits": { 00:22:23.149 "rw_ios_per_sec": 0, 00:22:23.149 "rw_mbytes_per_sec": 0, 00:22:23.149 "r_mbytes_per_sec": 0, 00:22:23.149 "w_mbytes_per_sec": 0 00:22:23.149 }, 00:22:23.149 "claimed": true, 00:22:23.149 "claim_type": "exclusive_write", 00:22:23.149 "zoned": false, 00:22:23.149 "supported_io_types": { 00:22:23.149 "read": true, 00:22:23.149 "write": true, 00:22:23.149 "unmap": true, 00:22:23.149 "flush": true, 00:22:23.149 "reset": true, 00:22:23.149 "nvme_admin": false, 00:22:23.149 "nvme_io": false, 00:22:23.149 "nvme_io_md": false, 00:22:23.149 "write_zeroes": true, 00:22:23.149 "zcopy": true, 00:22:23.149 "get_zone_info": false, 00:22:23.149 "zone_management": false, 00:22:23.149 "zone_append": false, 00:22:23.149 "compare": false, 00:22:23.149 "compare_and_write": false, 00:22:23.149 "abort": true, 00:22:23.149 "seek_hole": false, 00:22:23.149 "seek_data": false, 00:22:23.149 "copy": true, 00:22:23.149 "nvme_iov_md": false 00:22:23.149 }, 00:22:23.149 "memory_domains": [ 00:22:23.149 { 00:22:23.149 "dma_device_id": "system", 00:22:23.149 "dma_device_type": 1 00:22:23.149 }, 00:22:23.149 { 00:22:23.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.149 "dma_device_type": 2 00:22:23.149 } 00:22:23.149 ], 00:22:23.149 "driver_specific": {} 00:22:23.149 } 00:22:23.149 ] 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.149 14:05:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.407 14:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.407 "name": "Existed_Raid", 00:22:23.407 "uuid": "c0a07b5d-5703-49c9-961a-b2dc63a475da", 00:22:23.407 "strip_size_kb": 0, 00:22:23.407 "state": "configuring", 00:22:23.407 "raid_level": "raid1", 00:22:23.407 "superblock": true, 00:22:23.407 "num_base_bdevs": 3, 00:22:23.407 "num_base_bdevs_discovered": 1, 00:22:23.407 "num_base_bdevs_operational": 3, 00:22:23.407 "base_bdevs_list": [ 00:22:23.407 { 00:22:23.407 "name": "BaseBdev1", 00:22:23.407 "uuid": "26695c36-adf2-49fb-b63c-b9c057775106", 00:22:23.407 "is_configured": true, 00:22:23.407 "data_offset": 2048, 00:22:23.407 "data_size": 63488 00:22:23.407 }, 00:22:23.407 { 00:22:23.407 "name": "BaseBdev2", 00:22:23.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.407 "is_configured": false, 00:22:23.407 "data_offset": 0, 00:22:23.407 "data_size": 0 00:22:23.407 }, 00:22:23.407 { 00:22:23.407 "name": "BaseBdev3", 00:22:23.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.407 "is_configured": false, 00:22:23.407 "data_offset": 0, 00:22:23.407 "data_size": 0 00:22:23.407 } 00:22:23.407 ] 00:22:23.407 }' 00:22:23.407 14:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.407 14:05:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.972 14:05:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:24.230 [2024-07-25 14:05:13.149208] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:24.230 [2024-07-25 14:05:13.149504] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:22:24.230 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:24.487 [2024-07-25 14:05:13.441308] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.487 [2024-07-25 14:05:13.443712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:24.487 [2024-07-25 14:05:13.443922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:24.487 [2024-07-25 14:05:13.444052] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:24.487 [2024-07-25 14:05:13.444149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.487 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.744 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.744 "name": "Existed_Raid", 00:22:24.744 "uuid": "375e698f-313f-4a90-92ca-e3ae4353c446", 00:22:24.744 "strip_size_kb": 0, 00:22:24.744 "state": "configuring", 00:22:24.744 "raid_level": "raid1", 00:22:24.744 "superblock": true, 00:22:24.744 "num_base_bdevs": 3, 00:22:24.744 "num_base_bdevs_discovered": 1, 00:22:24.744 "num_base_bdevs_operational": 3, 00:22:24.744 "base_bdevs_list": [ 00:22:24.744 { 00:22:24.744 "name": "BaseBdev1", 00:22:24.744 "uuid": "26695c36-adf2-49fb-b63c-b9c057775106", 00:22:24.744 "is_configured": true, 00:22:24.744 "data_offset": 2048, 00:22:24.744 "data_size": 63488 00:22:24.744 }, 00:22:24.744 { 00:22:24.744 "name": "BaseBdev2", 00:22:24.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.744 "is_configured": false, 00:22:24.744 "data_offset": 0, 00:22:24.744 "data_size": 0 00:22:24.744 }, 00:22:24.744 { 00:22:24.744 "name": "BaseBdev3", 00:22:24.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.744 "is_configured": false, 00:22:24.744 "data_offset": 0, 00:22:24.744 "data_size": 0 00:22:24.744 } 00:22:24.744 ] 00:22:24.744 }' 00:22:24.744 14:05:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.744 14:05:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.675 14:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:25.675 [2024-07-25 14:05:14.685228] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.675 BaseBdev2 00:22:25.675 14:05:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:25.675 14:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:25.675 14:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:25.675 14:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:25.675 14:05:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:25.675 14:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:25.675 14:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:26.242 14:05:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:26.500 [ 00:22:26.500 { 00:22:26.500 "name": "BaseBdev2", 00:22:26.500 "aliases": [ 00:22:26.500 "633320cb-b519-4dab-aa20-c3aa4f81514f" 00:22:26.500 ], 00:22:26.500 "product_name": "Malloc disk", 00:22:26.500 "block_size": 512, 00:22:26.500 "num_blocks": 65536, 00:22:26.500 "uuid": "633320cb-b519-4dab-aa20-c3aa4f81514f", 00:22:26.500 "assigned_rate_limits": { 00:22:26.500 "rw_ios_per_sec": 0, 00:22:26.500 "rw_mbytes_per_sec": 0, 00:22:26.500 "r_mbytes_per_sec": 0, 00:22:26.500 "w_mbytes_per_sec": 0 00:22:26.500 }, 00:22:26.500 "claimed": true, 00:22:26.500 "claim_type": "exclusive_write", 00:22:26.500 "zoned": false, 00:22:26.500 "supported_io_types": { 00:22:26.500 "read": true, 00:22:26.500 "write": true, 00:22:26.500 "unmap": true, 00:22:26.500 "flush": true, 00:22:26.500 "reset": true, 00:22:26.500 "nvme_admin": false, 00:22:26.500 "nvme_io": false, 00:22:26.500 "nvme_io_md": false, 00:22:26.500 "write_zeroes": true, 00:22:26.500 "zcopy": true, 00:22:26.501 "get_zone_info": false, 00:22:26.501 "zone_management": false, 00:22:26.501 "zone_append": false, 00:22:26.501 "compare": false, 00:22:26.501 "compare_and_write": false, 00:22:26.501 "abort": true, 00:22:26.501 "seek_hole": false, 00:22:26.501 "seek_data": false, 00:22:26.501 "copy": true, 00:22:26.501 "nvme_iov_md": false 00:22:26.501 }, 00:22:26.501 "memory_domains": [ 00:22:26.501 { 00:22:26.501 "dma_device_id": "system", 00:22:26.501 "dma_device_type": 1 00:22:26.501 }, 00:22:26.501 { 00:22:26.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.501 "dma_device_type": 2 00:22:26.501 } 00:22:26.501 ], 00:22:26.501 "driver_specific": {} 00:22:26.501 } 00:22:26.501 ] 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
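verify_raid_bdev_state, whose locals are being set up in the lines above, amounts to pulling the Existed_Raid entry out of bdev_raid_get_bdevs and comparing a few of its fields against the caller's expectations. A hedged sketch of that comparison using the same jq filter the trace shows (the variable names mirror the locals above; the real helper checks more fields, including the per-base-bdev list):

    rpc='./scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    raid_bdev_name=Existed_Raid expected_state=configuring num_base_bdevs_operational=3
    tmp=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
    # Compare the reported state and operational count with what the caller expects.
    [[ $(jq -r .state <<< "$tmp") == "$expected_state" ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$tmp") -eq $num_base_bdevs_operational ]]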
00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.501 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.759 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.759 "name": "Existed_Raid", 00:22:26.759 "uuid": "375e698f-313f-4a90-92ca-e3ae4353c446", 00:22:26.759 "strip_size_kb": 0, 00:22:26.759 "state": "configuring", 00:22:26.759 "raid_level": "raid1", 00:22:26.759 "superblock": true, 00:22:26.759 "num_base_bdevs": 3, 00:22:26.759 "num_base_bdevs_discovered": 2, 00:22:26.759 "num_base_bdevs_operational": 3, 00:22:26.759 "base_bdevs_list": [ 00:22:26.759 { 00:22:26.759 "name": "BaseBdev1", 00:22:26.759 "uuid": "26695c36-adf2-49fb-b63c-b9c057775106", 00:22:26.759 "is_configured": true, 00:22:26.759 "data_offset": 2048, 00:22:26.759 "data_size": 63488 00:22:26.759 }, 00:22:26.759 { 00:22:26.759 "name": "BaseBdev2", 00:22:26.759 "uuid": "633320cb-b519-4dab-aa20-c3aa4f81514f", 00:22:26.759 "is_configured": true, 00:22:26.759 "data_offset": 2048, 00:22:26.759 "data_size": 63488 00:22:26.759 }, 00:22:26.759 { 00:22:26.759 "name": "BaseBdev3", 00:22:26.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.759 "is_configured": false, 00:22:26.759 "data_offset": 0, 00:22:26.759 "data_size": 0 00:22:26.759 } 00:22:26.759 ] 00:22:26.759 }' 00:22:26.759 14:05:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.759 14:05:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.324 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:27.582 [2024-07-25 14:05:16.452870] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:27.583 [2024-07-25 14:05:16.453536] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:22:27.583 [2024-07-25 14:05:16.453738] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:27.583 [2024-07-25 14:05:16.454161] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:27.583 BaseBdev3 00:22:27.583 [2024-07-25 14:05:16.454848] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:22:27.583 [2024-07-25 14:05:16.454978] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:22:27.583 [2024-07-25 14:05:16.455277] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.583 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:27.583 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:27.583 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:27.583 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
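With BaseBdev3 claimed, the raid bdev has just been configured online (the io device register / "raid bdev is created with name Existed_Raid" messages above), and the trace that follows re-runs the per-base-bdev property checks seen repeatedly at bdev_raid.sh@205-@208. A compact sketch of that loop; the base bdev names are hard-coded here for brevity (the script derives them from the raid's base_bdevs_list), and the expected values are the ones the trace itself compares against:

    rpc='./scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for name in BaseBdev1 BaseBdev2 BaseBdev3; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<< "$info") == 512 ]]       # 512-byte logical blocks
        [[ $(jq .md_size <<< "$info") == null ]]         # no separate metadata area
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type <<< "$info") == null ]]
    done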
00:22:27.583 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:27.583 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:27.583 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:27.840 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:28.098 [ 00:22:28.098 { 00:22:28.098 "name": "BaseBdev3", 00:22:28.098 "aliases": [ 00:22:28.098 "691b771c-f7ea-4896-a565-44d264e62318" 00:22:28.098 ], 00:22:28.098 "product_name": "Malloc disk", 00:22:28.098 "block_size": 512, 00:22:28.098 "num_blocks": 65536, 00:22:28.098 "uuid": "691b771c-f7ea-4896-a565-44d264e62318", 00:22:28.098 "assigned_rate_limits": { 00:22:28.098 "rw_ios_per_sec": 0, 00:22:28.098 "rw_mbytes_per_sec": 0, 00:22:28.098 "r_mbytes_per_sec": 0, 00:22:28.098 "w_mbytes_per_sec": 0 00:22:28.098 }, 00:22:28.098 "claimed": true, 00:22:28.098 "claim_type": "exclusive_write", 00:22:28.098 "zoned": false, 00:22:28.098 "supported_io_types": { 00:22:28.098 "read": true, 00:22:28.098 "write": true, 00:22:28.098 "unmap": true, 00:22:28.098 "flush": true, 00:22:28.098 "reset": true, 00:22:28.098 "nvme_admin": false, 00:22:28.098 "nvme_io": false, 00:22:28.098 "nvme_io_md": false, 00:22:28.098 "write_zeroes": true, 00:22:28.098 "zcopy": true, 00:22:28.098 "get_zone_info": false, 00:22:28.098 "zone_management": false, 00:22:28.098 "zone_append": false, 00:22:28.098 "compare": false, 00:22:28.098 "compare_and_write": false, 00:22:28.098 "abort": true, 00:22:28.098 "seek_hole": false, 00:22:28.098 "seek_data": false, 00:22:28.098 "copy": true, 00:22:28.098 "nvme_iov_md": false 00:22:28.098 }, 00:22:28.098 "memory_domains": [ 00:22:28.098 { 00:22:28.098 "dma_device_id": "system", 00:22:28.098 "dma_device_type": 1 00:22:28.098 }, 00:22:28.098 { 00:22:28.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.098 "dma_device_type": 2 00:22:28.098 } 00:22:28.098 ], 00:22:28.098 "driver_specific": {} 00:22:28.098 } 00:22:28.098 ] 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.098 14:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.356 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.356 "name": "Existed_Raid", 00:22:28.356 "uuid": "375e698f-313f-4a90-92ca-e3ae4353c446", 00:22:28.356 "strip_size_kb": 0, 00:22:28.356 "state": "online", 00:22:28.356 "raid_level": "raid1", 00:22:28.356 "superblock": true, 00:22:28.356 "num_base_bdevs": 3, 00:22:28.356 "num_base_bdevs_discovered": 3, 00:22:28.356 "num_base_bdevs_operational": 3, 00:22:28.356 "base_bdevs_list": [ 00:22:28.356 { 00:22:28.356 "name": "BaseBdev1", 00:22:28.356 "uuid": "26695c36-adf2-49fb-b63c-b9c057775106", 00:22:28.356 "is_configured": true, 00:22:28.356 "data_offset": 2048, 00:22:28.356 "data_size": 63488 00:22:28.356 }, 00:22:28.356 { 00:22:28.356 "name": "BaseBdev2", 00:22:28.356 "uuid": "633320cb-b519-4dab-aa20-c3aa4f81514f", 00:22:28.356 "is_configured": true, 00:22:28.356 "data_offset": 2048, 00:22:28.356 "data_size": 63488 00:22:28.356 }, 00:22:28.356 { 00:22:28.356 "name": "BaseBdev3", 00:22:28.356 "uuid": "691b771c-f7ea-4896-a565-44d264e62318", 00:22:28.356 "is_configured": true, 00:22:28.356 "data_offset": 2048, 00:22:28.356 "data_size": 63488 00:22:28.356 } 00:22:28.356 ] 00:22:28.356 }' 00:22:28.356 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.356 14:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:28.920 14:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:29.221 [2024-07-25 14:05:18.140108] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.221 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:29.221 "name": "Existed_Raid", 00:22:29.221 "aliases": [ 00:22:29.221 "375e698f-313f-4a90-92ca-e3ae4353c446" 00:22:29.221 ], 00:22:29.221 "product_name": "Raid Volume", 00:22:29.221 "block_size": 512, 00:22:29.221 "num_blocks": 63488, 00:22:29.221 "uuid": "375e698f-313f-4a90-92ca-e3ae4353c446", 00:22:29.221 "assigned_rate_limits": { 00:22:29.221 
"rw_ios_per_sec": 0, 00:22:29.221 "rw_mbytes_per_sec": 0, 00:22:29.221 "r_mbytes_per_sec": 0, 00:22:29.221 "w_mbytes_per_sec": 0 00:22:29.221 }, 00:22:29.221 "claimed": false, 00:22:29.221 "zoned": false, 00:22:29.221 "supported_io_types": { 00:22:29.221 "read": true, 00:22:29.221 "write": true, 00:22:29.221 "unmap": false, 00:22:29.221 "flush": false, 00:22:29.221 "reset": true, 00:22:29.221 "nvme_admin": false, 00:22:29.221 "nvme_io": false, 00:22:29.221 "nvme_io_md": false, 00:22:29.221 "write_zeroes": true, 00:22:29.221 "zcopy": false, 00:22:29.221 "get_zone_info": false, 00:22:29.221 "zone_management": false, 00:22:29.221 "zone_append": false, 00:22:29.221 "compare": false, 00:22:29.221 "compare_and_write": false, 00:22:29.221 "abort": false, 00:22:29.221 "seek_hole": false, 00:22:29.221 "seek_data": false, 00:22:29.221 "copy": false, 00:22:29.221 "nvme_iov_md": false 00:22:29.221 }, 00:22:29.221 "memory_domains": [ 00:22:29.221 { 00:22:29.221 "dma_device_id": "system", 00:22:29.221 "dma_device_type": 1 00:22:29.221 }, 00:22:29.221 { 00:22:29.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.221 "dma_device_type": 2 00:22:29.221 }, 00:22:29.221 { 00:22:29.221 "dma_device_id": "system", 00:22:29.221 "dma_device_type": 1 00:22:29.221 }, 00:22:29.221 { 00:22:29.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.221 "dma_device_type": 2 00:22:29.221 }, 00:22:29.221 { 00:22:29.221 "dma_device_id": "system", 00:22:29.221 "dma_device_type": 1 00:22:29.221 }, 00:22:29.221 { 00:22:29.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.221 "dma_device_type": 2 00:22:29.221 } 00:22:29.221 ], 00:22:29.221 "driver_specific": { 00:22:29.221 "raid": { 00:22:29.221 "uuid": "375e698f-313f-4a90-92ca-e3ae4353c446", 00:22:29.221 "strip_size_kb": 0, 00:22:29.221 "state": "online", 00:22:29.221 "raid_level": "raid1", 00:22:29.221 "superblock": true, 00:22:29.221 "num_base_bdevs": 3, 00:22:29.221 "num_base_bdevs_discovered": 3, 00:22:29.221 "num_base_bdevs_operational": 3, 00:22:29.221 "base_bdevs_list": [ 00:22:29.221 { 00:22:29.221 "name": "BaseBdev1", 00:22:29.221 "uuid": "26695c36-adf2-49fb-b63c-b9c057775106", 00:22:29.221 "is_configured": true, 00:22:29.221 "data_offset": 2048, 00:22:29.221 "data_size": 63488 00:22:29.221 }, 00:22:29.221 { 00:22:29.221 "name": "BaseBdev2", 00:22:29.221 "uuid": "633320cb-b519-4dab-aa20-c3aa4f81514f", 00:22:29.221 "is_configured": true, 00:22:29.221 "data_offset": 2048, 00:22:29.222 "data_size": 63488 00:22:29.222 }, 00:22:29.222 { 00:22:29.222 "name": "BaseBdev3", 00:22:29.222 "uuid": "691b771c-f7ea-4896-a565-44d264e62318", 00:22:29.222 "is_configured": true, 00:22:29.222 "data_offset": 2048, 00:22:29.222 "data_size": 63488 00:22:29.222 } 00:22:29.222 ] 00:22:29.222 } 00:22:29.222 } 00:22:29.222 }' 00:22:29.222 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:29.222 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:29.222 BaseBdev2 00:22:29.222 BaseBdev3' 00:22:29.222 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:29.222 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:29.222 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:29.480 14:05:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:29.480 "name": "BaseBdev1", 00:22:29.480 "aliases": [ 00:22:29.480 "26695c36-adf2-49fb-b63c-b9c057775106" 00:22:29.480 ], 00:22:29.480 "product_name": "Malloc disk", 00:22:29.480 "block_size": 512, 00:22:29.480 "num_blocks": 65536, 00:22:29.480 "uuid": "26695c36-adf2-49fb-b63c-b9c057775106", 00:22:29.480 "assigned_rate_limits": { 00:22:29.480 "rw_ios_per_sec": 0, 00:22:29.480 "rw_mbytes_per_sec": 0, 00:22:29.480 "r_mbytes_per_sec": 0, 00:22:29.480 "w_mbytes_per_sec": 0 00:22:29.480 }, 00:22:29.480 "claimed": true, 00:22:29.480 "claim_type": "exclusive_write", 00:22:29.480 "zoned": false, 00:22:29.480 "supported_io_types": { 00:22:29.480 "read": true, 00:22:29.480 "write": true, 00:22:29.480 "unmap": true, 00:22:29.480 "flush": true, 00:22:29.480 "reset": true, 00:22:29.480 "nvme_admin": false, 00:22:29.480 "nvme_io": false, 00:22:29.480 "nvme_io_md": false, 00:22:29.480 "write_zeroes": true, 00:22:29.480 "zcopy": true, 00:22:29.480 "get_zone_info": false, 00:22:29.480 "zone_management": false, 00:22:29.480 "zone_append": false, 00:22:29.480 "compare": false, 00:22:29.480 "compare_and_write": false, 00:22:29.480 "abort": true, 00:22:29.480 "seek_hole": false, 00:22:29.480 "seek_data": false, 00:22:29.480 "copy": true, 00:22:29.480 "nvme_iov_md": false 00:22:29.480 }, 00:22:29.480 "memory_domains": [ 00:22:29.480 { 00:22:29.480 "dma_device_id": "system", 00:22:29.480 "dma_device_type": 1 00:22:29.480 }, 00:22:29.480 { 00:22:29.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.480 "dma_device_type": 2 00:22:29.480 } 00:22:29.480 ], 00:22:29.480 "driver_specific": {} 00:22:29.480 }' 00:22:29.480 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.480 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:29.738 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.994 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.994 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:29.994 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:29.994 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:29.994 14:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:30.251 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:30.251 "name": "BaseBdev2", 00:22:30.251 "aliases": [ 
00:22:30.251 "633320cb-b519-4dab-aa20-c3aa4f81514f" 00:22:30.251 ], 00:22:30.251 "product_name": "Malloc disk", 00:22:30.251 "block_size": 512, 00:22:30.251 "num_blocks": 65536, 00:22:30.251 "uuid": "633320cb-b519-4dab-aa20-c3aa4f81514f", 00:22:30.251 "assigned_rate_limits": { 00:22:30.251 "rw_ios_per_sec": 0, 00:22:30.251 "rw_mbytes_per_sec": 0, 00:22:30.251 "r_mbytes_per_sec": 0, 00:22:30.251 "w_mbytes_per_sec": 0 00:22:30.251 }, 00:22:30.251 "claimed": true, 00:22:30.251 "claim_type": "exclusive_write", 00:22:30.251 "zoned": false, 00:22:30.251 "supported_io_types": { 00:22:30.251 "read": true, 00:22:30.251 "write": true, 00:22:30.251 "unmap": true, 00:22:30.251 "flush": true, 00:22:30.251 "reset": true, 00:22:30.251 "nvme_admin": false, 00:22:30.251 "nvme_io": false, 00:22:30.251 "nvme_io_md": false, 00:22:30.251 "write_zeroes": true, 00:22:30.251 "zcopy": true, 00:22:30.251 "get_zone_info": false, 00:22:30.251 "zone_management": false, 00:22:30.251 "zone_append": false, 00:22:30.251 "compare": false, 00:22:30.251 "compare_and_write": false, 00:22:30.251 "abort": true, 00:22:30.251 "seek_hole": false, 00:22:30.251 "seek_data": false, 00:22:30.251 "copy": true, 00:22:30.251 "nvme_iov_md": false 00:22:30.251 }, 00:22:30.251 "memory_domains": [ 00:22:30.251 { 00:22:30.251 "dma_device_id": "system", 00:22:30.251 "dma_device_type": 1 00:22:30.251 }, 00:22:30.251 { 00:22:30.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.251 "dma_device_type": 2 00:22:30.251 } 00:22:30.251 ], 00:22:30.251 "driver_specific": {} 00:22:30.251 }' 00:22:30.251 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.251 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.251 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:30.251 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.509 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.509 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:30.509 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.509 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.509 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:30.509 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.509 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.766 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:30.766 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:30.766 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:30.766 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:31.024 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:31.024 "name": "BaseBdev3", 00:22:31.024 "aliases": [ 00:22:31.024 "691b771c-f7ea-4896-a565-44d264e62318" 00:22:31.024 ], 00:22:31.024 "product_name": "Malloc disk", 00:22:31.024 "block_size": 512, 
00:22:31.024 "num_blocks": 65536, 00:22:31.024 "uuid": "691b771c-f7ea-4896-a565-44d264e62318", 00:22:31.024 "assigned_rate_limits": { 00:22:31.024 "rw_ios_per_sec": 0, 00:22:31.024 "rw_mbytes_per_sec": 0, 00:22:31.024 "r_mbytes_per_sec": 0, 00:22:31.024 "w_mbytes_per_sec": 0 00:22:31.024 }, 00:22:31.024 "claimed": true, 00:22:31.024 "claim_type": "exclusive_write", 00:22:31.024 "zoned": false, 00:22:31.024 "supported_io_types": { 00:22:31.024 "read": true, 00:22:31.024 "write": true, 00:22:31.024 "unmap": true, 00:22:31.024 "flush": true, 00:22:31.024 "reset": true, 00:22:31.024 "nvme_admin": false, 00:22:31.024 "nvme_io": false, 00:22:31.024 "nvme_io_md": false, 00:22:31.024 "write_zeroes": true, 00:22:31.024 "zcopy": true, 00:22:31.024 "get_zone_info": false, 00:22:31.024 "zone_management": false, 00:22:31.024 "zone_append": false, 00:22:31.024 "compare": false, 00:22:31.024 "compare_and_write": false, 00:22:31.024 "abort": true, 00:22:31.024 "seek_hole": false, 00:22:31.024 "seek_data": false, 00:22:31.024 "copy": true, 00:22:31.024 "nvme_iov_md": false 00:22:31.024 }, 00:22:31.024 "memory_domains": [ 00:22:31.024 { 00:22:31.024 "dma_device_id": "system", 00:22:31.024 "dma_device_type": 1 00:22:31.024 }, 00:22:31.024 { 00:22:31.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.024 "dma_device_type": 2 00:22:31.024 } 00:22:31.024 ], 00:22:31.024 "driver_specific": {} 00:22:31.024 }' 00:22:31.024 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.024 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.024 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:31.024 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.024 14:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.024 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:31.024 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.281 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.281 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:31.282 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.282 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.282 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:31.282 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:31.539 [2024-07-25 14:05:20.472249] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:31.539 14:05:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.539 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.107 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.107 "name": "Existed_Raid", 00:22:32.107 "uuid": "375e698f-313f-4a90-92ca-e3ae4353c446", 00:22:32.107 "strip_size_kb": 0, 00:22:32.107 "state": "online", 00:22:32.107 "raid_level": "raid1", 00:22:32.107 "superblock": true, 00:22:32.107 "num_base_bdevs": 3, 00:22:32.107 "num_base_bdevs_discovered": 2, 00:22:32.107 "num_base_bdevs_operational": 2, 00:22:32.107 "base_bdevs_list": [ 00:22:32.107 { 00:22:32.107 "name": null, 00:22:32.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.107 "is_configured": false, 00:22:32.107 "data_offset": 2048, 00:22:32.107 "data_size": 63488 00:22:32.107 }, 00:22:32.107 { 00:22:32.107 "name": "BaseBdev2", 00:22:32.107 "uuid": "633320cb-b519-4dab-aa20-c3aa4f81514f", 00:22:32.107 "is_configured": true, 00:22:32.107 "data_offset": 2048, 00:22:32.107 "data_size": 63488 00:22:32.107 }, 00:22:32.107 { 00:22:32.107 "name": "BaseBdev3", 00:22:32.107 "uuid": "691b771c-f7ea-4896-a565-44d264e62318", 00:22:32.107 "is_configured": true, 00:22:32.107 "data_offset": 2048, 00:22:32.107 "data_size": 63488 00:22:32.107 } 00:22:32.107 ] 00:22:32.107 }' 00:22:32.107 14:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.107 14:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.672 14:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:32.672 14:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:32.672 14:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:32.672 14:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.930 14:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:32.930 14:05:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:32.930 14:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:33.186 [2024-07-25 14:05:21.989506] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:33.186 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:33.186 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:33.187 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.187 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:33.444 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:33.444 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:33.444 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:33.701 [2024-07-25 14:05:22.567478] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:33.701 [2024-07-25 14:05:22.567841] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:33.701 [2024-07-25 14:05:22.653632] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.701 [2024-07-25 14:05:22.654012] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.701 [2024-07-25 14:05:22.654183] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:22:33.701 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:33.701 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:33.701 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.701 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:33.958 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:33.958 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:33.958 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:22:33.958 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:33.958 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:33.958 14:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:34.215 BaseBdev2 00:22:34.476 14:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:34.476 14:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:34.476 14:05:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:34.476 14:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:34.476 14:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:34.476 14:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:34.476 14:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:34.739 14:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:34.996 [ 00:22:34.996 { 00:22:34.996 "name": "BaseBdev2", 00:22:34.996 "aliases": [ 00:22:34.996 "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b" 00:22:34.996 ], 00:22:34.996 "product_name": "Malloc disk", 00:22:34.996 "block_size": 512, 00:22:34.996 "num_blocks": 65536, 00:22:34.996 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:34.996 "assigned_rate_limits": { 00:22:34.996 "rw_ios_per_sec": 0, 00:22:34.996 "rw_mbytes_per_sec": 0, 00:22:34.996 "r_mbytes_per_sec": 0, 00:22:34.996 "w_mbytes_per_sec": 0 00:22:34.996 }, 00:22:34.996 "claimed": false, 00:22:34.996 "zoned": false, 00:22:34.996 "supported_io_types": { 00:22:34.996 "read": true, 00:22:34.996 "write": true, 00:22:34.996 "unmap": true, 00:22:34.996 "flush": true, 00:22:34.996 "reset": true, 00:22:34.996 "nvme_admin": false, 00:22:34.996 "nvme_io": false, 00:22:34.996 "nvme_io_md": false, 00:22:34.996 "write_zeroes": true, 00:22:34.996 "zcopy": true, 00:22:34.996 "get_zone_info": false, 00:22:34.996 "zone_management": false, 00:22:34.996 "zone_append": false, 00:22:34.996 "compare": false, 00:22:34.996 "compare_and_write": false, 00:22:34.996 "abort": true, 00:22:34.996 "seek_hole": false, 00:22:34.996 "seek_data": false, 00:22:34.996 "copy": true, 00:22:34.996 "nvme_iov_md": false 00:22:34.996 }, 00:22:34.996 "memory_domains": [ 00:22:34.996 { 00:22:34.996 "dma_device_id": "system", 00:22:34.996 "dma_device_type": 1 00:22:34.996 }, 00:22:34.996 { 00:22:34.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.996 "dma_device_type": 2 00:22:34.996 } 00:22:34.996 ], 00:22:34.996 "driver_specific": {} 00:22:34.996 } 00:22:34.996 ] 00:22:34.996 14:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:34.996 14:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:34.996 14:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:34.996 14:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:35.253 BaseBdev3 00:22:35.253 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:35.253 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:35.253 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:35.253 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:35.253 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
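The base-bdev recreation traced above reduces to three RPCs against the test socket: create a 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the JSON dump), wait for bdev examination to finish, and query bdev_get_bdevs with a timeout until the new bdev is visible. A minimal manual reproduction, using only the socket path, RPC names and arguments that appear in the trace (the surrounding harness logic is omitted), would look like:

# Reproduce the RPC calls shown in the trace; socket path and arguments copied from the log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000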
00:22:35.253 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:35.253 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:35.511 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:35.767 [ 00:22:35.767 { 00:22:35.767 "name": "BaseBdev3", 00:22:35.767 "aliases": [ 00:22:35.767 "2a422686-58d3-4f40-80a3-0fed833b4559" 00:22:35.767 ], 00:22:35.767 "product_name": "Malloc disk", 00:22:35.767 "block_size": 512, 00:22:35.767 "num_blocks": 65536, 00:22:35.767 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:35.767 "assigned_rate_limits": { 00:22:35.767 "rw_ios_per_sec": 0, 00:22:35.767 "rw_mbytes_per_sec": 0, 00:22:35.767 "r_mbytes_per_sec": 0, 00:22:35.767 "w_mbytes_per_sec": 0 00:22:35.767 }, 00:22:35.767 "claimed": false, 00:22:35.767 "zoned": false, 00:22:35.767 "supported_io_types": { 00:22:35.767 "read": true, 00:22:35.767 "write": true, 00:22:35.767 "unmap": true, 00:22:35.767 "flush": true, 00:22:35.767 "reset": true, 00:22:35.767 "nvme_admin": false, 00:22:35.767 "nvme_io": false, 00:22:35.767 "nvme_io_md": false, 00:22:35.767 "write_zeroes": true, 00:22:35.767 "zcopy": true, 00:22:35.767 "get_zone_info": false, 00:22:35.767 "zone_management": false, 00:22:35.767 "zone_append": false, 00:22:35.767 "compare": false, 00:22:35.767 "compare_and_write": false, 00:22:35.767 "abort": true, 00:22:35.767 "seek_hole": false, 00:22:35.767 "seek_data": false, 00:22:35.767 "copy": true, 00:22:35.767 "nvme_iov_md": false 00:22:35.767 }, 00:22:35.767 "memory_domains": [ 00:22:35.767 { 00:22:35.767 "dma_device_id": "system", 00:22:35.767 "dma_device_type": 1 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.767 "dma_device_type": 2 00:22:35.767 } 00:22:35.767 ], 00:22:35.767 "driver_specific": {} 00:22:35.767 } 00:22:35.767 ] 00:22:35.767 14:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:35.767 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:35.767 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:35.767 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:36.024 [2024-07-25 14:05:24.845566] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.024 [2024-07-25 14:05:24.845955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.024 [2024-07-25 14:05:24.846103] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.024 [2024-07-25 14:05:24.848289] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.024 14:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.281 14:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.281 "name": "Existed_Raid", 00:22:36.281 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:36.281 "strip_size_kb": 0, 00:22:36.281 "state": "configuring", 00:22:36.281 "raid_level": "raid1", 00:22:36.281 "superblock": true, 00:22:36.281 "num_base_bdevs": 3, 00:22:36.281 "num_base_bdevs_discovered": 2, 00:22:36.281 "num_base_bdevs_operational": 3, 00:22:36.281 "base_bdevs_list": [ 00:22:36.281 { 00:22:36.281 "name": "BaseBdev1", 00:22:36.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.281 "is_configured": false, 00:22:36.281 "data_offset": 0, 00:22:36.281 "data_size": 0 00:22:36.281 }, 00:22:36.281 { 00:22:36.281 "name": "BaseBdev2", 00:22:36.282 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:36.282 "is_configured": true, 00:22:36.282 "data_offset": 2048, 00:22:36.282 "data_size": 63488 00:22:36.282 }, 00:22:36.282 { 00:22:36.282 "name": "BaseBdev3", 00:22:36.282 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:36.282 "is_configured": true, 00:22:36.282 "data_offset": 2048, 00:22:36.282 "data_size": 63488 00:22:36.282 } 00:22:36.282 ] 00:22:36.282 }' 00:22:36.282 14:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.282 14:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.844 14:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:37.102 [2024-07-25 14:05:26.073883] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:37.102 14:05:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.102 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.420 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.420 "name": "Existed_Raid", 00:22:37.420 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:37.420 "strip_size_kb": 0, 00:22:37.420 "state": "configuring", 00:22:37.420 "raid_level": "raid1", 00:22:37.420 "superblock": true, 00:22:37.420 "num_base_bdevs": 3, 00:22:37.420 "num_base_bdevs_discovered": 1, 00:22:37.420 "num_base_bdevs_operational": 3, 00:22:37.420 "base_bdevs_list": [ 00:22:37.420 { 00:22:37.420 "name": "BaseBdev1", 00:22:37.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.420 "is_configured": false, 00:22:37.420 "data_offset": 0, 00:22:37.420 "data_size": 0 00:22:37.420 }, 00:22:37.420 { 00:22:37.420 "name": null, 00:22:37.420 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:37.420 "is_configured": false, 00:22:37.420 "data_offset": 2048, 00:22:37.420 "data_size": 63488 00:22:37.420 }, 00:22:37.420 { 00:22:37.420 "name": "BaseBdev3", 00:22:37.420 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:37.420 "is_configured": true, 00:22:37.420 "data_offset": 2048, 00:22:37.420 "data_size": 63488 00:22:37.421 } 00:22:37.421 ] 00:22:37.421 }' 00:22:37.421 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.421 14:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.987 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.987 14:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:38.245 14:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:38.245 14:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:38.504 [2024-07-25 14:05:27.485318] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.504 BaseBdev1 00:22:38.504 14:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:38.504 14:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:38.504 14:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:38.504 14:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:38.504 14:05:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:38.504 14:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:38.504 14:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:38.762 14:05:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:39.020 [ 00:22:39.020 { 00:22:39.020 "name": "BaseBdev1", 00:22:39.020 "aliases": [ 00:22:39.020 "627a2690-304c-4d4b-9a41-aa5371fb9895" 00:22:39.020 ], 00:22:39.020 "product_name": "Malloc disk", 00:22:39.020 "block_size": 512, 00:22:39.020 "num_blocks": 65536, 00:22:39.020 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:39.020 "assigned_rate_limits": { 00:22:39.020 "rw_ios_per_sec": 0, 00:22:39.020 "rw_mbytes_per_sec": 0, 00:22:39.021 "r_mbytes_per_sec": 0, 00:22:39.021 "w_mbytes_per_sec": 0 00:22:39.021 }, 00:22:39.021 "claimed": true, 00:22:39.021 "claim_type": "exclusive_write", 00:22:39.021 "zoned": false, 00:22:39.021 "supported_io_types": { 00:22:39.021 "read": true, 00:22:39.021 "write": true, 00:22:39.021 "unmap": true, 00:22:39.021 "flush": true, 00:22:39.021 "reset": true, 00:22:39.021 "nvme_admin": false, 00:22:39.021 "nvme_io": false, 00:22:39.021 "nvme_io_md": false, 00:22:39.021 "write_zeroes": true, 00:22:39.021 "zcopy": true, 00:22:39.021 "get_zone_info": false, 00:22:39.021 "zone_management": false, 00:22:39.021 "zone_append": false, 00:22:39.021 "compare": false, 00:22:39.021 "compare_and_write": false, 00:22:39.021 "abort": true, 00:22:39.021 "seek_hole": false, 00:22:39.021 "seek_data": false, 00:22:39.021 "copy": true, 00:22:39.021 "nvme_iov_md": false 00:22:39.021 }, 00:22:39.021 "memory_domains": [ 00:22:39.021 { 00:22:39.021 "dma_device_id": "system", 00:22:39.021 "dma_device_type": 1 00:22:39.021 }, 00:22:39.021 { 00:22:39.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.021 "dma_device_type": 2 00:22:39.021 } 00:22:39.021 ], 00:22:39.021 "driver_specific": {} 00:22:39.021 } 00:22:39.021 ] 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
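Each verify_raid_bdev_state step traced in this section comes down to the same pair of commands: dump all raid bdevs over the test socket, select the entry by name with jq, and compare fields such as state, raid_level and the num_base_bdevs counters against the expected values held in the local variables above. Stripped of the harness, and using only the RPC, socket and jq filter visible in the trace (the trailing field projection is illustrative, not part of the helper), the check is roughly:

# Fetch the raid bdev record the way the traced check does, then look at the fields it compares.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | {state, raid_level, num_base_bdevs_discovered, num_base_bdevs_operational}'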
00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.021 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.279 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.279 "name": "Existed_Raid", 00:22:39.279 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:39.279 "strip_size_kb": 0, 00:22:39.279 "state": "configuring", 00:22:39.279 "raid_level": "raid1", 00:22:39.279 "superblock": true, 00:22:39.279 "num_base_bdevs": 3, 00:22:39.279 "num_base_bdevs_discovered": 2, 00:22:39.279 "num_base_bdevs_operational": 3, 00:22:39.279 "base_bdevs_list": [ 00:22:39.279 { 00:22:39.279 "name": "BaseBdev1", 00:22:39.279 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:39.279 "is_configured": true, 00:22:39.279 "data_offset": 2048, 00:22:39.279 "data_size": 63488 00:22:39.279 }, 00:22:39.279 { 00:22:39.279 "name": null, 00:22:39.279 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:39.279 "is_configured": false, 00:22:39.279 "data_offset": 2048, 00:22:39.279 "data_size": 63488 00:22:39.279 }, 00:22:39.279 { 00:22:39.279 "name": "BaseBdev3", 00:22:39.279 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:39.279 "is_configured": true, 00:22:39.279 "data_offset": 2048, 00:22:39.279 "data_size": 63488 00:22:39.279 } 00:22:39.279 ] 00:22:39.279 }' 00:22:39.279 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.279 14:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.845 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.845 14:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:40.410 [2024-07-25 14:05:29.369833] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.410 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.668 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.668 "name": "Existed_Raid", 00:22:40.668 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:40.668 "strip_size_kb": 0, 00:22:40.668 "state": "configuring", 00:22:40.668 "raid_level": "raid1", 00:22:40.668 "superblock": true, 00:22:40.668 "num_base_bdevs": 3, 00:22:40.668 "num_base_bdevs_discovered": 1, 00:22:40.668 "num_base_bdevs_operational": 3, 00:22:40.668 "base_bdevs_list": [ 00:22:40.668 { 00:22:40.668 "name": "BaseBdev1", 00:22:40.668 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:40.668 "is_configured": true, 00:22:40.668 "data_offset": 2048, 00:22:40.668 "data_size": 63488 00:22:40.668 }, 00:22:40.668 { 00:22:40.668 "name": null, 00:22:40.668 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:40.668 "is_configured": false, 00:22:40.668 "data_offset": 2048, 00:22:40.668 "data_size": 63488 00:22:40.668 }, 00:22:40.668 { 00:22:40.668 "name": null, 00:22:40.668 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:40.668 "is_configured": false, 00:22:40.668 "data_offset": 2048, 00:22:40.668 "data_size": 63488 00:22:40.668 } 00:22:40.668 ] 00:22:40.668 }' 00:22:40.668 14:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.668 14:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.601 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.601 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:41.858 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:41.858 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:41.859 [2024-07-25 14:05:30.878185] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.859 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.116 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.116 14:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.374 14:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.374 "name": "Existed_Raid", 00:22:42.374 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:42.374 "strip_size_kb": 0, 00:22:42.374 "state": "configuring", 00:22:42.374 "raid_level": "raid1", 00:22:42.374 "superblock": true, 00:22:42.374 "num_base_bdevs": 3, 00:22:42.374 "num_base_bdevs_discovered": 2, 00:22:42.374 "num_base_bdevs_operational": 3, 00:22:42.374 "base_bdevs_list": [ 00:22:42.374 { 00:22:42.374 "name": "BaseBdev1", 00:22:42.374 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:42.374 "is_configured": true, 00:22:42.374 "data_offset": 2048, 00:22:42.374 "data_size": 63488 00:22:42.374 }, 00:22:42.374 { 00:22:42.374 "name": null, 00:22:42.375 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:42.375 "is_configured": false, 00:22:42.375 "data_offset": 2048, 00:22:42.375 "data_size": 63488 00:22:42.375 }, 00:22:42.375 { 00:22:42.375 "name": "BaseBdev3", 00:22:42.375 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:42.375 "is_configured": true, 00:22:42.375 "data_offset": 2048, 00:22:42.375 "data_size": 63488 00:22:42.375 } 00:22:42.375 ] 00:22:42.375 }' 00:22:42.375 14:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.375 14:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.941 14:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.941 14:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:43.199 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:43.199 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:43.457 [2024-07-25 14:05:32.338705] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.457 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.027 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.027 "name": "Existed_Raid", 00:22:44.027 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:44.027 "strip_size_kb": 0, 00:22:44.027 "state": "configuring", 00:22:44.027 "raid_level": "raid1", 00:22:44.027 "superblock": true, 00:22:44.027 "num_base_bdevs": 3, 00:22:44.027 "num_base_bdevs_discovered": 1, 00:22:44.027 "num_base_bdevs_operational": 3, 00:22:44.027 "base_bdevs_list": [ 00:22:44.027 { 00:22:44.027 "name": null, 00:22:44.027 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:44.027 "is_configured": false, 00:22:44.027 "data_offset": 2048, 00:22:44.027 "data_size": 63488 00:22:44.027 }, 00:22:44.027 { 00:22:44.027 "name": null, 00:22:44.027 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:44.027 "is_configured": false, 00:22:44.027 "data_offset": 2048, 00:22:44.027 "data_size": 63488 00:22:44.027 }, 00:22:44.027 { 00:22:44.027 "name": "BaseBdev3", 00:22:44.027 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:44.027 "is_configured": true, 00:22:44.027 "data_offset": 2048, 00:22:44.027 "data_size": 63488 00:22:44.027 } 00:22:44.027 ] 00:22:44.027 }' 00:22:44.027 14:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.027 14:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.592 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.592 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:44.850 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:44.850 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:45.108 [2024-07-25 14:05:33.911666] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.108 14:05:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.108 14:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.365 14:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.365 "name": "Existed_Raid", 00:22:45.365 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:45.365 "strip_size_kb": 0, 00:22:45.365 "state": "configuring", 00:22:45.365 "raid_level": "raid1", 00:22:45.365 "superblock": true, 00:22:45.365 "num_base_bdevs": 3, 00:22:45.365 "num_base_bdevs_discovered": 2, 00:22:45.365 "num_base_bdevs_operational": 3, 00:22:45.365 "base_bdevs_list": [ 00:22:45.365 { 00:22:45.365 "name": null, 00:22:45.365 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:45.365 "is_configured": false, 00:22:45.365 "data_offset": 2048, 00:22:45.365 "data_size": 63488 00:22:45.365 }, 00:22:45.365 { 00:22:45.365 "name": "BaseBdev2", 00:22:45.365 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:45.365 "is_configured": true, 00:22:45.365 "data_offset": 2048, 00:22:45.366 "data_size": 63488 00:22:45.366 }, 00:22:45.366 { 00:22:45.366 "name": "BaseBdev3", 00:22:45.366 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:45.366 "is_configured": true, 00:22:45.366 "data_offset": 2048, 00:22:45.366 "data_size": 63488 00:22:45.366 } 00:22:45.366 ] 00:22:45.366 }' 00:22:45.366 14:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.366 14:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.933 14:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.933 14:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:46.191 14:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:46.191 14:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:46.191 14:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.449 14:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 627a2690-304c-4d4b-9a41-aa5371fb9895 00:22:46.706 [2024-07-25 14:05:35.679243] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:46.706 [2024-07-25 14:05:35.679728] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:22:46.706 [2024-07-25 
14:05:35.679869] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:46.706 [2024-07-25 14:05:35.680030] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:46.706 [2024-07-25 14:05:35.680530] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:22:46.706 NewBaseBdev 00:22:46.706 [2024-07-25 14:05:35.680691] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:22:46.706 [2024-07-25 14:05:35.680957] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.706 14:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:46.706 14:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:46.707 14:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:46.707 14:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:46.707 14:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:46.707 14:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:46.707 14:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.965 14:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:47.223 [ 00:22:47.223 { 00:22:47.223 "name": "NewBaseBdev", 00:22:47.223 "aliases": [ 00:22:47.223 "627a2690-304c-4d4b-9a41-aa5371fb9895" 00:22:47.223 ], 00:22:47.223 "product_name": "Malloc disk", 00:22:47.223 "block_size": 512, 00:22:47.223 "num_blocks": 65536, 00:22:47.223 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:47.223 "assigned_rate_limits": { 00:22:47.223 "rw_ios_per_sec": 0, 00:22:47.223 "rw_mbytes_per_sec": 0, 00:22:47.223 "r_mbytes_per_sec": 0, 00:22:47.223 "w_mbytes_per_sec": 0 00:22:47.223 }, 00:22:47.223 "claimed": true, 00:22:47.223 "claim_type": "exclusive_write", 00:22:47.223 "zoned": false, 00:22:47.223 "supported_io_types": { 00:22:47.223 "read": true, 00:22:47.223 "write": true, 00:22:47.223 "unmap": true, 00:22:47.223 "flush": true, 00:22:47.223 "reset": true, 00:22:47.223 "nvme_admin": false, 00:22:47.223 "nvme_io": false, 00:22:47.223 "nvme_io_md": false, 00:22:47.223 "write_zeroes": true, 00:22:47.223 "zcopy": true, 00:22:47.223 "get_zone_info": false, 00:22:47.223 "zone_management": false, 00:22:47.223 "zone_append": false, 00:22:47.223 "compare": false, 00:22:47.223 "compare_and_write": false, 00:22:47.223 "abort": true, 00:22:47.223 "seek_hole": false, 00:22:47.223 "seek_data": false, 00:22:47.223 "copy": true, 00:22:47.223 "nvme_iov_md": false 00:22:47.223 }, 00:22:47.223 "memory_domains": [ 00:22:47.223 { 00:22:47.223 "dma_device_id": "system", 00:22:47.223 "dma_device_type": 1 00:22:47.223 }, 00:22:47.223 { 00:22:47.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.223 "dma_device_type": 2 00:22:47.223 } 00:22:47.223 ], 00:22:47.223 "driver_specific": {} 00:22:47.223 } 00:22:47.223 ] 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:47.223 14:05:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.223 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.481 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.481 "name": "Existed_Raid", 00:22:47.481 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:47.481 "strip_size_kb": 0, 00:22:47.481 "state": "online", 00:22:47.481 "raid_level": "raid1", 00:22:47.481 "superblock": true, 00:22:47.481 "num_base_bdevs": 3, 00:22:47.481 "num_base_bdevs_discovered": 3, 00:22:47.481 "num_base_bdevs_operational": 3, 00:22:47.481 "base_bdevs_list": [ 00:22:47.481 { 00:22:47.481 "name": "NewBaseBdev", 00:22:47.481 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:47.481 "is_configured": true, 00:22:47.481 "data_offset": 2048, 00:22:47.481 "data_size": 63488 00:22:47.481 }, 00:22:47.481 { 00:22:47.481 "name": "BaseBdev2", 00:22:47.481 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:47.481 "is_configured": true, 00:22:47.481 "data_offset": 2048, 00:22:47.481 "data_size": 63488 00:22:47.481 }, 00:22:47.481 { 00:22:47.481 "name": "BaseBdev3", 00:22:47.481 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:47.481 "is_configured": true, 00:22:47.481 "data_offset": 2048, 00:22:47.481 "data_size": 63488 00:22:47.481 } 00:22:47.481 ] 00:22:47.481 }' 00:22:47.481 14:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.481 14:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.412 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:48.412 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:48.412 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:48.412 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:48.412 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:48.412 14:05:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:22:48.412 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:48.413 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:48.413 [2024-07-25 14:05:37.435959] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.670 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:48.670 "name": "Existed_Raid", 00:22:48.670 "aliases": [ 00:22:48.670 "a286afd8-16c9-4981-8f59-423e400bf02c" 00:22:48.670 ], 00:22:48.670 "product_name": "Raid Volume", 00:22:48.670 "block_size": 512, 00:22:48.670 "num_blocks": 63488, 00:22:48.670 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:48.670 "assigned_rate_limits": { 00:22:48.670 "rw_ios_per_sec": 0, 00:22:48.670 "rw_mbytes_per_sec": 0, 00:22:48.670 "r_mbytes_per_sec": 0, 00:22:48.670 "w_mbytes_per_sec": 0 00:22:48.670 }, 00:22:48.670 "claimed": false, 00:22:48.670 "zoned": false, 00:22:48.670 "supported_io_types": { 00:22:48.670 "read": true, 00:22:48.670 "write": true, 00:22:48.670 "unmap": false, 00:22:48.670 "flush": false, 00:22:48.670 "reset": true, 00:22:48.670 "nvme_admin": false, 00:22:48.670 "nvme_io": false, 00:22:48.670 "nvme_io_md": false, 00:22:48.670 "write_zeroes": true, 00:22:48.670 "zcopy": false, 00:22:48.670 "get_zone_info": false, 00:22:48.670 "zone_management": false, 00:22:48.670 "zone_append": false, 00:22:48.670 "compare": false, 00:22:48.670 "compare_and_write": false, 00:22:48.670 "abort": false, 00:22:48.670 "seek_hole": false, 00:22:48.670 "seek_data": false, 00:22:48.670 "copy": false, 00:22:48.670 "nvme_iov_md": false 00:22:48.670 }, 00:22:48.670 "memory_domains": [ 00:22:48.670 { 00:22:48.670 "dma_device_id": "system", 00:22:48.670 "dma_device_type": 1 00:22:48.670 }, 00:22:48.670 { 00:22:48.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.670 "dma_device_type": 2 00:22:48.670 }, 00:22:48.670 { 00:22:48.670 "dma_device_id": "system", 00:22:48.670 "dma_device_type": 1 00:22:48.670 }, 00:22:48.670 { 00:22:48.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.670 "dma_device_type": 2 00:22:48.670 }, 00:22:48.670 { 00:22:48.670 "dma_device_id": "system", 00:22:48.670 "dma_device_type": 1 00:22:48.670 }, 00:22:48.670 { 00:22:48.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.670 "dma_device_type": 2 00:22:48.670 } 00:22:48.670 ], 00:22:48.670 "driver_specific": { 00:22:48.670 "raid": { 00:22:48.670 "uuid": "a286afd8-16c9-4981-8f59-423e400bf02c", 00:22:48.670 "strip_size_kb": 0, 00:22:48.670 "state": "online", 00:22:48.670 "raid_level": "raid1", 00:22:48.670 "superblock": true, 00:22:48.670 "num_base_bdevs": 3, 00:22:48.670 "num_base_bdevs_discovered": 3, 00:22:48.670 "num_base_bdevs_operational": 3, 00:22:48.670 "base_bdevs_list": [ 00:22:48.670 { 00:22:48.670 "name": "NewBaseBdev", 00:22:48.670 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:48.670 "is_configured": true, 00:22:48.670 "data_offset": 2048, 00:22:48.670 "data_size": 63488 00:22:48.670 }, 00:22:48.670 { 00:22:48.670 "name": "BaseBdev2", 00:22:48.670 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:48.670 "is_configured": true, 00:22:48.670 "data_offset": 2048, 00:22:48.670 "data_size": 63488 00:22:48.670 }, 00:22:48.670 { 00:22:48.670 "name": "BaseBdev3", 00:22:48.670 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:48.670 "is_configured": true, 
00:22:48.670 "data_offset": 2048, 00:22:48.670 "data_size": 63488 00:22:48.670 } 00:22:48.670 ] 00:22:48.670 } 00:22:48.670 } 00:22:48.670 }' 00:22:48.670 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:48.670 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:48.670 BaseBdev2 00:22:48.670 BaseBdev3' 00:22:48.670 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:48.670 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:48.670 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:48.928 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:48.928 "name": "NewBaseBdev", 00:22:48.928 "aliases": [ 00:22:48.928 "627a2690-304c-4d4b-9a41-aa5371fb9895" 00:22:48.928 ], 00:22:48.928 "product_name": "Malloc disk", 00:22:48.928 "block_size": 512, 00:22:48.928 "num_blocks": 65536, 00:22:48.928 "uuid": "627a2690-304c-4d4b-9a41-aa5371fb9895", 00:22:48.928 "assigned_rate_limits": { 00:22:48.928 "rw_ios_per_sec": 0, 00:22:48.928 "rw_mbytes_per_sec": 0, 00:22:48.928 "r_mbytes_per_sec": 0, 00:22:48.928 "w_mbytes_per_sec": 0 00:22:48.928 }, 00:22:48.928 "claimed": true, 00:22:48.928 "claim_type": "exclusive_write", 00:22:48.928 "zoned": false, 00:22:48.928 "supported_io_types": { 00:22:48.928 "read": true, 00:22:48.928 "write": true, 00:22:48.928 "unmap": true, 00:22:48.928 "flush": true, 00:22:48.928 "reset": true, 00:22:48.928 "nvme_admin": false, 00:22:48.928 "nvme_io": false, 00:22:48.928 "nvme_io_md": false, 00:22:48.928 "write_zeroes": true, 00:22:48.928 "zcopy": true, 00:22:48.928 "get_zone_info": false, 00:22:48.928 "zone_management": false, 00:22:48.928 "zone_append": false, 00:22:48.928 "compare": false, 00:22:48.928 "compare_and_write": false, 00:22:48.928 "abort": true, 00:22:48.928 "seek_hole": false, 00:22:48.928 "seek_data": false, 00:22:48.928 "copy": true, 00:22:48.928 "nvme_iov_md": false 00:22:48.928 }, 00:22:48.928 "memory_domains": [ 00:22:48.928 { 00:22:48.928 "dma_device_id": "system", 00:22:48.928 "dma_device_type": 1 00:22:48.928 }, 00:22:48.928 { 00:22:48.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.928 "dma_device_type": 2 00:22:48.928 } 00:22:48.928 ], 00:22:48.928 "driver_specific": {} 00:22:48.928 }' 00:22:48.928 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:48.928 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:48.928 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:48.928 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:48.928 14:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:49.186 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:49.443 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:49.443 "name": "BaseBdev2", 00:22:49.443 "aliases": [ 00:22:49.443 "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b" 00:22:49.443 ], 00:22:49.443 "product_name": "Malloc disk", 00:22:49.443 "block_size": 512, 00:22:49.443 "num_blocks": 65536, 00:22:49.443 "uuid": "5d098a23-6fc5-4769-ae4d-db9fd3c87b8b", 00:22:49.443 "assigned_rate_limits": { 00:22:49.443 "rw_ios_per_sec": 0, 00:22:49.443 "rw_mbytes_per_sec": 0, 00:22:49.443 "r_mbytes_per_sec": 0, 00:22:49.443 "w_mbytes_per_sec": 0 00:22:49.443 }, 00:22:49.443 "claimed": true, 00:22:49.443 "claim_type": "exclusive_write", 00:22:49.443 "zoned": false, 00:22:49.443 "supported_io_types": { 00:22:49.443 "read": true, 00:22:49.443 "write": true, 00:22:49.443 "unmap": true, 00:22:49.443 "flush": true, 00:22:49.443 "reset": true, 00:22:49.443 "nvme_admin": false, 00:22:49.443 "nvme_io": false, 00:22:49.443 "nvme_io_md": false, 00:22:49.443 "write_zeroes": true, 00:22:49.443 "zcopy": true, 00:22:49.443 "get_zone_info": false, 00:22:49.443 "zone_management": false, 00:22:49.443 "zone_append": false, 00:22:49.443 "compare": false, 00:22:49.443 "compare_and_write": false, 00:22:49.443 "abort": true, 00:22:49.443 "seek_hole": false, 00:22:49.443 "seek_data": false, 00:22:49.443 "copy": true, 00:22:49.443 "nvme_iov_md": false 00:22:49.443 }, 00:22:49.443 "memory_domains": [ 00:22:49.443 { 00:22:49.443 "dma_device_id": "system", 00:22:49.443 "dma_device_type": 1 00:22:49.443 }, 00:22:49.443 { 00:22:49.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.443 "dma_device_type": 2 00:22:49.443 } 00:22:49.443 ], 00:22:49.443 "driver_specific": {} 00:22:49.443 }' 00:22:49.443 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:49.702 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:49.702 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:49.702 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:49.702 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:49.702 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:49.702 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:49.702 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:49.960 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:49.960 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:49.960 14:05:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:49.960 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:49.960 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:49.960 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:49.960 14:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:50.218 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:50.218 "name": "BaseBdev3", 00:22:50.218 "aliases": [ 00:22:50.218 "2a422686-58d3-4f40-80a3-0fed833b4559" 00:22:50.218 ], 00:22:50.218 "product_name": "Malloc disk", 00:22:50.218 "block_size": 512, 00:22:50.218 "num_blocks": 65536, 00:22:50.218 "uuid": "2a422686-58d3-4f40-80a3-0fed833b4559", 00:22:50.218 "assigned_rate_limits": { 00:22:50.218 "rw_ios_per_sec": 0, 00:22:50.218 "rw_mbytes_per_sec": 0, 00:22:50.218 "r_mbytes_per_sec": 0, 00:22:50.218 "w_mbytes_per_sec": 0 00:22:50.218 }, 00:22:50.218 "claimed": true, 00:22:50.218 "claim_type": "exclusive_write", 00:22:50.218 "zoned": false, 00:22:50.218 "supported_io_types": { 00:22:50.218 "read": true, 00:22:50.218 "write": true, 00:22:50.218 "unmap": true, 00:22:50.218 "flush": true, 00:22:50.218 "reset": true, 00:22:50.218 "nvme_admin": false, 00:22:50.218 "nvme_io": false, 00:22:50.218 "nvme_io_md": false, 00:22:50.218 "write_zeroes": true, 00:22:50.218 "zcopy": true, 00:22:50.218 "get_zone_info": false, 00:22:50.218 "zone_management": false, 00:22:50.218 "zone_append": false, 00:22:50.218 "compare": false, 00:22:50.218 "compare_and_write": false, 00:22:50.218 "abort": true, 00:22:50.218 "seek_hole": false, 00:22:50.218 "seek_data": false, 00:22:50.218 "copy": true, 00:22:50.218 "nvme_iov_md": false 00:22:50.218 }, 00:22:50.218 "memory_domains": [ 00:22:50.218 { 00:22:50.218 "dma_device_id": "system", 00:22:50.218 "dma_device_type": 1 00:22:50.218 }, 00:22:50.218 { 00:22:50.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.218 "dma_device_type": 2 00:22:50.218 } 00:22:50.218 ], 00:22:50.218 "driver_specific": {} 00:22:50.218 }' 00:22:50.218 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:50.218 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:50.218 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:50.218 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:50.218 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:50.476 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:50.734 [2024-07-25 14:05:39.748129] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:50.734 [2024-07-25 14:05:39.748437] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:50.734 [2024-07-25 14:05:39.748689] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.734 [2024-07-25 14:05:39.749155] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:50.734 [2024-07-25 14:05:39.749285] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 131930 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 131930 ']' 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 131930 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131930 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131930' 00:22:50.992 killing process with pid 131930 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 131930 00:22:50.992 14:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 131930 00:22:50.992 [2024-07-25 14:05:39.803630] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:51.249 [2024-07-25 14:05:40.056714] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:52.181 ************************************ 00:22:52.181 END TEST raid_state_function_test_sb 00:22:52.181 ************************************ 00:22:52.182 14:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:52.182 00:22:52.182 real 0m33.233s 00:22:52.182 user 1m1.772s 00:22:52.182 sys 0m3.834s 00:22:52.182 14:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:52.182 14:05:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 14:05:41 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:22:52.440 14:05:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:52.440 14:05:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:52.440 14:05:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 ************************************ 00:22:52.440 START TEST raid_superblock_test 00:22:52.440 ************************************ 00:22:52.440 
14:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=132937 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 132937 /var/tmp/spdk-raid.sock 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 132937 ']' 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:52.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.440 14:05:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 [2024-07-25 14:05:41.359412] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
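The trace above shows the harness pattern for raid_superblock_test: a dedicated bdev_svc app is started with the bdev_raid debug log flag on its own RPC socket, and the test waits for that socket before issuing any RAID RPCs. A minimal sketch of the same bring-up outside the autotest framework, using only the binary path, socket name, and flags taken from the log; the polling loop below stands in for the harness's waitforlisten helper and is an assumption, not the harness code:

    # start the bdev service app with bdev_raid debug logging on a private RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll the socket until the RPC server answers (the harness uses waitforlisten instead)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done

Once the socket answers, the malloc and passthru base bdevs that follow are all created against it.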
00:22:52.440 [2024-07-25 14:05:41.361003] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132937 ] 00:22:52.699 [2024-07-25 14:05:41.543797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.955 [2024-07-25 14:05:41.789823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.955 [2024-07-25 14:05:41.990222] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.519 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:53.776 malloc1 00:22:53.776 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:54.033 [2024-07-25 14:05:42.930533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:54.033 [2024-07-25 14:05:42.930880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.033 [2024-07-25 14:05:42.931081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:54.033 [2024-07-25 14:05:42.931238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.033 [2024-07-25 14:05:42.934259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.033 [2024-07-25 14:05:42.934454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:54.033 pt1 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:54.033 14:05:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:54.291 malloc2 00:22:54.291 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:54.854 [2024-07-25 14:05:43.604474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:54.854 [2024-07-25 14:05:43.604922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.854 [2024-07-25 14:05:43.605118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:54.854 [2024-07-25 14:05:43.605288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.854 [2024-07-25 14:05:43.608452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.854 [2024-07-25 14:05:43.608661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:54.854 pt2 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:54.854 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:55.112 malloc3 00:22:55.112 14:05:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:55.370 [2024-07-25 14:05:44.240991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:55.370 [2024-07-25 14:05:44.241358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.370 [2024-07-25 14:05:44.241553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:55.370 [2024-07-25 14:05:44.241709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.370 [2024-07-25 14:05:44.244522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.370 [2024-07-25 14:05:44.244724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:55.370 pt3 00:22:55.370 
14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:55.370 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:55.370 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:55.628 [2024-07-25 14:05:44.517229] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:55.628 [2024-07-25 14:05:44.519662] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:55.628 [2024-07-25 14:05:44.519898] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:55.629 [2024-07-25 14:05:44.520268] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:22:55.629 [2024-07-25 14:05:44.520404] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:55.629 [2024-07-25 14:05:44.520687] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:55.629 [2024-07-25 14:05:44.521297] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:22:55.629 [2024-07-25 14:05:44.521455] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:22:55.629 [2024-07-25 14:05:44.521828] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.629 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.886 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.886 "name": "raid_bdev1", 00:22:55.886 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:22:55.886 "strip_size_kb": 0, 00:22:55.886 "state": "online", 00:22:55.886 "raid_level": "raid1", 00:22:55.886 "superblock": true, 00:22:55.886 "num_base_bdevs": 3, 00:22:55.886 "num_base_bdevs_discovered": 3, 00:22:55.886 "num_base_bdevs_operational": 3, 00:22:55.886 "base_bdevs_list": [ 00:22:55.886 { 00:22:55.886 "name": "pt1", 00:22:55.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:55.886 
"is_configured": true, 00:22:55.886 "data_offset": 2048, 00:22:55.886 "data_size": 63488 00:22:55.886 }, 00:22:55.886 { 00:22:55.887 "name": "pt2", 00:22:55.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:55.887 "is_configured": true, 00:22:55.887 "data_offset": 2048, 00:22:55.887 "data_size": 63488 00:22:55.887 }, 00:22:55.887 { 00:22:55.887 "name": "pt3", 00:22:55.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:55.887 "is_configured": true, 00:22:55.887 "data_offset": 2048, 00:22:55.887 "data_size": 63488 00:22:55.887 } 00:22:55.887 ] 00:22:55.887 }' 00:22:55.887 14:05:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.887 14:05:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:56.819 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:57.076 [2024-07-25 14:05:45.878492] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.076 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:57.076 "name": "raid_bdev1", 00:22:57.076 "aliases": [ 00:22:57.076 "c53a09e1-c32d-4f25-b987-c5538c427c32" 00:22:57.076 ], 00:22:57.076 "product_name": "Raid Volume", 00:22:57.076 "block_size": 512, 00:22:57.076 "num_blocks": 63488, 00:22:57.076 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:22:57.076 "assigned_rate_limits": { 00:22:57.076 "rw_ios_per_sec": 0, 00:22:57.076 "rw_mbytes_per_sec": 0, 00:22:57.076 "r_mbytes_per_sec": 0, 00:22:57.076 "w_mbytes_per_sec": 0 00:22:57.076 }, 00:22:57.076 "claimed": false, 00:22:57.076 "zoned": false, 00:22:57.076 "supported_io_types": { 00:22:57.076 "read": true, 00:22:57.076 "write": true, 00:22:57.076 "unmap": false, 00:22:57.076 "flush": false, 00:22:57.076 "reset": true, 00:22:57.076 "nvme_admin": false, 00:22:57.076 "nvme_io": false, 00:22:57.076 "nvme_io_md": false, 00:22:57.076 "write_zeroes": true, 00:22:57.076 "zcopy": false, 00:22:57.076 "get_zone_info": false, 00:22:57.076 "zone_management": false, 00:22:57.077 "zone_append": false, 00:22:57.077 "compare": false, 00:22:57.077 "compare_and_write": false, 00:22:57.077 "abort": false, 00:22:57.077 "seek_hole": false, 00:22:57.077 "seek_data": false, 00:22:57.077 "copy": false, 00:22:57.077 "nvme_iov_md": false 00:22:57.077 }, 00:22:57.077 "memory_domains": [ 00:22:57.077 { 00:22:57.077 "dma_device_id": "system", 00:22:57.077 "dma_device_type": 1 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.077 "dma_device_type": 2 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "dma_device_id": "system", 00:22:57.077 "dma_device_type": 1 00:22:57.077 }, 00:22:57.077 { 
00:22:57.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.077 "dma_device_type": 2 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "dma_device_id": "system", 00:22:57.077 "dma_device_type": 1 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.077 "dma_device_type": 2 00:22:57.077 } 00:22:57.077 ], 00:22:57.077 "driver_specific": { 00:22:57.077 "raid": { 00:22:57.077 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:22:57.077 "strip_size_kb": 0, 00:22:57.077 "state": "online", 00:22:57.077 "raid_level": "raid1", 00:22:57.077 "superblock": true, 00:22:57.077 "num_base_bdevs": 3, 00:22:57.077 "num_base_bdevs_discovered": 3, 00:22:57.077 "num_base_bdevs_operational": 3, 00:22:57.077 "base_bdevs_list": [ 00:22:57.077 { 00:22:57.077 "name": "pt1", 00:22:57.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.077 "is_configured": true, 00:22:57.077 "data_offset": 2048, 00:22:57.077 "data_size": 63488 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "name": "pt2", 00:22:57.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.077 "is_configured": true, 00:22:57.077 "data_offset": 2048, 00:22:57.077 "data_size": 63488 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "name": "pt3", 00:22:57.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.077 "is_configured": true, 00:22:57.077 "data_offset": 2048, 00:22:57.077 "data_size": 63488 00:22:57.077 } 00:22:57.077 ] 00:22:57.077 } 00:22:57.077 } 00:22:57.077 }' 00:22:57.077 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:57.077 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:57.077 pt2 00:22:57.077 pt3' 00:22:57.077 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.077 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:57.077 14:05:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:57.334 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:57.334 "name": "pt1", 00:22:57.334 "aliases": [ 00:22:57.334 "00000000-0000-0000-0000-000000000001" 00:22:57.334 ], 00:22:57.334 "product_name": "passthru", 00:22:57.334 "block_size": 512, 00:22:57.334 "num_blocks": 65536, 00:22:57.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.334 "assigned_rate_limits": { 00:22:57.334 "rw_ios_per_sec": 0, 00:22:57.334 "rw_mbytes_per_sec": 0, 00:22:57.334 "r_mbytes_per_sec": 0, 00:22:57.334 "w_mbytes_per_sec": 0 00:22:57.334 }, 00:22:57.334 "claimed": true, 00:22:57.334 "claim_type": "exclusive_write", 00:22:57.334 "zoned": false, 00:22:57.334 "supported_io_types": { 00:22:57.334 "read": true, 00:22:57.334 "write": true, 00:22:57.334 "unmap": true, 00:22:57.334 "flush": true, 00:22:57.334 "reset": true, 00:22:57.334 "nvme_admin": false, 00:22:57.334 "nvme_io": false, 00:22:57.334 "nvme_io_md": false, 00:22:57.334 "write_zeroes": true, 00:22:57.334 "zcopy": true, 00:22:57.334 "get_zone_info": false, 00:22:57.334 "zone_management": false, 00:22:57.334 "zone_append": false, 00:22:57.334 "compare": false, 00:22:57.334 "compare_and_write": false, 00:22:57.334 "abort": true, 00:22:57.334 "seek_hole": false, 00:22:57.334 "seek_data": false, 00:22:57.334 "copy": true, 00:22:57.334 "nvme_iov_md": false 00:22:57.334 }, 
00:22:57.334 "memory_domains": [ 00:22:57.334 { 00:22:57.334 "dma_device_id": "system", 00:22:57.334 "dma_device_type": 1 00:22:57.334 }, 00:22:57.334 { 00:22:57.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.334 "dma_device_type": 2 00:22:57.334 } 00:22:57.334 ], 00:22:57.334 "driver_specific": { 00:22:57.334 "passthru": { 00:22:57.334 "name": "pt1", 00:22:57.334 "base_bdev_name": "malloc1" 00:22:57.334 } 00:22:57.334 } 00:22:57.334 }' 00:22:57.334 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.334 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:57.334 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:57.334 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.591 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.591 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.591 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.591 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.591 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.591 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.591 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.848 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.848 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:57.848 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:57.848 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.105 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.105 "name": "pt2", 00:22:58.105 "aliases": [ 00:22:58.105 "00000000-0000-0000-0000-000000000002" 00:22:58.105 ], 00:22:58.105 "product_name": "passthru", 00:22:58.105 "block_size": 512, 00:22:58.105 "num_blocks": 65536, 00:22:58.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.105 "assigned_rate_limits": { 00:22:58.105 "rw_ios_per_sec": 0, 00:22:58.105 "rw_mbytes_per_sec": 0, 00:22:58.105 "r_mbytes_per_sec": 0, 00:22:58.105 "w_mbytes_per_sec": 0 00:22:58.105 }, 00:22:58.105 "claimed": true, 00:22:58.105 "claim_type": "exclusive_write", 00:22:58.105 "zoned": false, 00:22:58.105 "supported_io_types": { 00:22:58.105 "read": true, 00:22:58.105 "write": true, 00:22:58.105 "unmap": true, 00:22:58.105 "flush": true, 00:22:58.105 "reset": true, 00:22:58.105 "nvme_admin": false, 00:22:58.105 "nvme_io": false, 00:22:58.105 "nvme_io_md": false, 00:22:58.105 "write_zeroes": true, 00:22:58.105 "zcopy": true, 00:22:58.105 "get_zone_info": false, 00:22:58.105 "zone_management": false, 00:22:58.105 "zone_append": false, 00:22:58.105 "compare": false, 00:22:58.105 "compare_and_write": false, 00:22:58.105 "abort": true, 00:22:58.105 "seek_hole": false, 00:22:58.105 "seek_data": false, 00:22:58.105 "copy": true, 00:22:58.105 "nvme_iov_md": false 00:22:58.105 }, 00:22:58.105 "memory_domains": [ 00:22:58.105 { 00:22:58.105 "dma_device_id": "system", 00:22:58.105 "dma_device_type": 1 00:22:58.105 }, 00:22:58.105 { 
00:22:58.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.105 "dma_device_type": 2 00:22:58.105 } 00:22:58.105 ], 00:22:58.105 "driver_specific": { 00:22:58.105 "passthru": { 00:22:58.105 "name": "pt2", 00:22:58.105 "base_bdev_name": "malloc2" 00:22:58.105 } 00:22:58.105 } 00:22:58.105 }' 00:22:58.105 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.105 14:05:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.105 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.105 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.105 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.105 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.105 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:58.363 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:58.621 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:58.621 "name": "pt3", 00:22:58.621 "aliases": [ 00:22:58.621 "00000000-0000-0000-0000-000000000003" 00:22:58.621 ], 00:22:58.621 "product_name": "passthru", 00:22:58.621 "block_size": 512, 00:22:58.621 "num_blocks": 65536, 00:22:58.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:58.621 "assigned_rate_limits": { 00:22:58.621 "rw_ios_per_sec": 0, 00:22:58.621 "rw_mbytes_per_sec": 0, 00:22:58.621 "r_mbytes_per_sec": 0, 00:22:58.621 "w_mbytes_per_sec": 0 00:22:58.621 }, 00:22:58.621 "claimed": true, 00:22:58.621 "claim_type": "exclusive_write", 00:22:58.621 "zoned": false, 00:22:58.621 "supported_io_types": { 00:22:58.621 "read": true, 00:22:58.621 "write": true, 00:22:58.621 "unmap": true, 00:22:58.621 "flush": true, 00:22:58.621 "reset": true, 00:22:58.621 "nvme_admin": false, 00:22:58.621 "nvme_io": false, 00:22:58.621 "nvme_io_md": false, 00:22:58.621 "write_zeroes": true, 00:22:58.621 "zcopy": true, 00:22:58.621 "get_zone_info": false, 00:22:58.621 "zone_management": false, 00:22:58.621 "zone_append": false, 00:22:58.621 "compare": false, 00:22:58.621 "compare_and_write": false, 00:22:58.621 "abort": true, 00:22:58.621 "seek_hole": false, 00:22:58.621 "seek_data": false, 00:22:58.621 "copy": true, 00:22:58.621 "nvme_iov_md": false 00:22:58.621 }, 00:22:58.621 "memory_domains": [ 00:22:58.621 { 00:22:58.621 "dma_device_id": "system", 00:22:58.621 "dma_device_type": 1 00:22:58.621 }, 00:22:58.621 { 00:22:58.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.621 "dma_device_type": 2 00:22:58.621 } 00:22:58.621 ], 00:22:58.621 "driver_specific": { 
00:22:58.621 "passthru": { 00:22:58.621 "name": "pt3", 00:22:58.621 "base_bdev_name": "malloc3" 00:22:58.621 } 00:22:58.621 } 00:22:58.621 }' 00:22:58.621 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.891 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:58.891 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:58.891 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.891 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:58.891 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:58.891 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:58.891 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:59.150 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:59.150 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.150 14:05:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:59.150 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:59.150 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:59.150 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:22:59.408 [2024-07-25 14:05:48.303280] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:59.408 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=c53a09e1-c32d-4f25-b987-c5538c427c32 00:22:59.408 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z c53a09e1-c32d-4f25-b987-c5538c427c32 ']' 00:22:59.408 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:59.667 [2024-07-25 14:05:48.607046] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.667 [2024-07-25 14:05:48.607312] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.667 [2024-07-25 14:05:48.607549] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.667 [2024-07-25 14:05:48.607774] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.667 [2024-07-25 14:05:48.607902] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:22:59.667 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.667 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:22:59.924 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:22:59.925 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:22:59.925 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:59.925 14:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:00.182 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:00.182 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:00.440 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:00.440 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:00.699 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:00.699 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:00.957 14:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:01.523 [2024-07-25 14:05:50.267374] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:01.523 [2024-07-25 14:05:50.269902] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:01.523 [2024-07-25 14:05:50.270108] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:01.523 [2024-07-25 14:05:50.270218] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:01.523 [2024-07-25 14:05:50.270469] 
bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:01.523 [2024-07-25 14:05:50.270634] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:01.523 [2024-07-25 14:05:50.270798] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:01.523 [2024-07-25 14:05:50.270906] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:23:01.523 request: 00:23:01.523 { 00:23:01.523 "name": "raid_bdev1", 00:23:01.523 "raid_level": "raid1", 00:23:01.523 "base_bdevs": [ 00:23:01.523 "malloc1", 00:23:01.523 "malloc2", 00:23:01.523 "malloc3" 00:23:01.523 ], 00:23:01.523 "superblock": false, 00:23:01.523 "method": "bdev_raid_create", 00:23:01.523 "req_id": 1 00:23:01.523 } 00:23:01.523 Got JSON-RPC error response 00:23:01.523 response: 00:23:01.523 { 00:23:01.523 "code": -17, 00:23:01.523 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:01.523 } 00:23:01.523 14:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:01.523 14:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:01.523 14:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:01.523 14:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:01.523 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.523 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:23:01.780 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:23:01.780 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:23:01.780 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:02.037 [2024-07-25 14:05:50.871622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:02.037 [2024-07-25 14:05:50.871999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.037 [2024-07-25 14:05:50.872167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:02.038 [2024-07-25 14:05:50.872298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.038 [2024-07-25 14:05:50.875067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.038 [2024-07-25 14:05:50.875250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:02.038 [2024-07-25 14:05:50.875507] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:02.038 [2024-07-25 14:05:50.875678] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:02.038 pt1 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.038 14:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.295 14:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.295 "name": "raid_bdev1", 00:23:02.295 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:02.295 "strip_size_kb": 0, 00:23:02.295 "state": "configuring", 00:23:02.295 "raid_level": "raid1", 00:23:02.295 "superblock": true, 00:23:02.295 "num_base_bdevs": 3, 00:23:02.295 "num_base_bdevs_discovered": 1, 00:23:02.295 "num_base_bdevs_operational": 3, 00:23:02.295 "base_bdevs_list": [ 00:23:02.295 { 00:23:02.295 "name": "pt1", 00:23:02.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:02.295 "is_configured": true, 00:23:02.295 "data_offset": 2048, 00:23:02.295 "data_size": 63488 00:23:02.295 }, 00:23:02.295 { 00:23:02.295 "name": null, 00:23:02.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:02.295 "is_configured": false, 00:23:02.295 "data_offset": 2048, 00:23:02.295 "data_size": 63488 00:23:02.295 }, 00:23:02.295 { 00:23:02.295 "name": null, 00:23:02.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.295 "is_configured": false, 00:23:02.295 "data_offset": 2048, 00:23:02.295 "data_size": 63488 00:23:02.295 } 00:23:02.295 ] 00:23:02.295 }' 00:23:02.295 14:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.295 14:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.861 14:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:23:02.861 14:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:03.119 [2024-07-25 14:05:52.144346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:03.119 [2024-07-25 14:05:52.144784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.119 [2024-07-25 14:05:52.144878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:03.119 [2024-07-25 14:05:52.145082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.119 [2024-07-25 14:05:52.145723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.119 [2024-07-25 14:05:52.145936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:03.119 [2024-07-25 
14:05:52.146187] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:03.119 [2024-07-25 14:05:52.146346] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:03.119 pt2 00:23:03.376 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:03.376 [2024-07-25 14:05:52.404460] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.635 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.892 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.893 "name": "raid_bdev1", 00:23:03.893 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:03.893 "strip_size_kb": 0, 00:23:03.893 "state": "configuring", 00:23:03.893 "raid_level": "raid1", 00:23:03.893 "superblock": true, 00:23:03.893 "num_base_bdevs": 3, 00:23:03.893 "num_base_bdevs_discovered": 1, 00:23:03.893 "num_base_bdevs_operational": 3, 00:23:03.893 "base_bdevs_list": [ 00:23:03.893 { 00:23:03.893 "name": "pt1", 00:23:03.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:03.893 "is_configured": true, 00:23:03.893 "data_offset": 2048, 00:23:03.893 "data_size": 63488 00:23:03.893 }, 00:23:03.893 { 00:23:03.893 "name": null, 00:23:03.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.893 "is_configured": false, 00:23:03.893 "data_offset": 2048, 00:23:03.893 "data_size": 63488 00:23:03.893 }, 00:23:03.893 { 00:23:03.893 "name": null, 00:23:03.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:03.893 "is_configured": false, 00:23:03.893 "data_offset": 2048, 00:23:03.893 "data_size": 63488 00:23:03.893 } 00:23:03.893 ] 00:23:03.893 }' 00:23:03.893 14:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.893 14:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.467 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:23:04.467 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:04.467 14:05:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:04.732 [2024-07-25 14:05:53.624688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:04.732 [2024-07-25 14:05:53.625070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.732 [2024-07-25 14:05:53.625151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:04.732 [2024-07-25 14:05:53.625455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.732 [2024-07-25 14:05:53.626164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.732 [2024-07-25 14:05:53.626344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:04.732 [2024-07-25 14:05:53.626584] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:04.732 [2024-07-25 14:05:53.626731] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:04.732 pt2 00:23:04.732 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:04.732 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:04.732 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:04.990 [2024-07-25 14:05:53.872803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:04.990 [2024-07-25 14:05:53.873252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.990 [2024-07-25 14:05:53.873430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:04.990 [2024-07-25 14:05:53.873605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.990 [2024-07-25 14:05:53.874530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.990 [2024-07-25 14:05:53.874734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:04.990 [2024-07-25 14:05:53.875053] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:04.990 [2024-07-25 14:05:53.875231] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:04.990 [2024-07-25 14:05:53.875597] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:23:04.990 [2024-07-25 14:05:53.875743] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:04.990 [2024-07-25 14:05:53.875967] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:04.990 [2024-07-25 14:05:53.876501] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:23:04.990 [2024-07-25 14:05:53.876639] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:23:04.990 [2024-07-25 14:05:53.876960] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.990 pt3 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.990 14:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.248 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.248 "name": "raid_bdev1", 00:23:05.248 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:05.248 "strip_size_kb": 0, 00:23:05.248 "state": "online", 00:23:05.248 "raid_level": "raid1", 00:23:05.248 "superblock": true, 00:23:05.248 "num_base_bdevs": 3, 00:23:05.248 "num_base_bdevs_discovered": 3, 00:23:05.248 "num_base_bdevs_operational": 3, 00:23:05.248 "base_bdevs_list": [ 00:23:05.248 { 00:23:05.248 "name": "pt1", 00:23:05.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:05.248 "is_configured": true, 00:23:05.248 "data_offset": 2048, 00:23:05.248 "data_size": 63488 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "name": "pt2", 00:23:05.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:05.248 "is_configured": true, 00:23:05.248 "data_offset": 2048, 00:23:05.248 "data_size": 63488 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "name": "pt3", 00:23:05.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:05.248 "is_configured": true, 00:23:05.248 "data_offset": 2048, 00:23:05.248 "data_size": 63488 00:23:05.248 } 00:23:05.248 ] 00:23:05.248 }' 00:23:05.248 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.248 14:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:05.813 14:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:06.070 [2024-07-25 14:05:55.105550] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:06.329 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:06.329 "name": "raid_bdev1", 00:23:06.329 "aliases": [ 00:23:06.329 "c53a09e1-c32d-4f25-b987-c5538c427c32" 00:23:06.329 ], 00:23:06.329 "product_name": "Raid Volume", 00:23:06.329 "block_size": 512, 00:23:06.329 "num_blocks": 63488, 00:23:06.329 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:06.329 "assigned_rate_limits": { 00:23:06.329 "rw_ios_per_sec": 0, 00:23:06.329 "rw_mbytes_per_sec": 0, 00:23:06.329 "r_mbytes_per_sec": 0, 00:23:06.329 "w_mbytes_per_sec": 0 00:23:06.329 }, 00:23:06.329 "claimed": false, 00:23:06.329 "zoned": false, 00:23:06.329 "supported_io_types": { 00:23:06.329 "read": true, 00:23:06.329 "write": true, 00:23:06.329 "unmap": false, 00:23:06.329 "flush": false, 00:23:06.329 "reset": true, 00:23:06.329 "nvme_admin": false, 00:23:06.329 "nvme_io": false, 00:23:06.329 "nvme_io_md": false, 00:23:06.329 "write_zeroes": true, 00:23:06.330 "zcopy": false, 00:23:06.330 "get_zone_info": false, 00:23:06.330 "zone_management": false, 00:23:06.330 "zone_append": false, 00:23:06.330 "compare": false, 00:23:06.330 "compare_and_write": false, 00:23:06.330 "abort": false, 00:23:06.330 "seek_hole": false, 00:23:06.330 "seek_data": false, 00:23:06.330 "copy": false, 00:23:06.330 "nvme_iov_md": false 00:23:06.330 }, 00:23:06.330 "memory_domains": [ 00:23:06.330 { 00:23:06.330 "dma_device_id": "system", 00:23:06.330 "dma_device_type": 1 00:23:06.330 }, 00:23:06.330 { 00:23:06.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.330 "dma_device_type": 2 00:23:06.330 }, 00:23:06.330 { 00:23:06.330 "dma_device_id": "system", 00:23:06.330 "dma_device_type": 1 00:23:06.330 }, 00:23:06.330 { 00:23:06.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.330 "dma_device_type": 2 00:23:06.330 }, 00:23:06.330 { 00:23:06.330 "dma_device_id": "system", 00:23:06.330 "dma_device_type": 1 00:23:06.330 }, 00:23:06.330 { 00:23:06.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.330 "dma_device_type": 2 00:23:06.330 } 00:23:06.330 ], 00:23:06.330 "driver_specific": { 00:23:06.330 "raid": { 00:23:06.330 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:06.330 "strip_size_kb": 0, 00:23:06.330 "state": "online", 00:23:06.330 "raid_level": "raid1", 00:23:06.330 "superblock": true, 00:23:06.330 "num_base_bdevs": 3, 00:23:06.330 "num_base_bdevs_discovered": 3, 00:23:06.330 "num_base_bdevs_operational": 3, 00:23:06.330 "base_bdevs_list": [ 00:23:06.330 { 00:23:06.330 "name": "pt1", 00:23:06.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:06.330 "is_configured": true, 00:23:06.330 "data_offset": 2048, 00:23:06.330 "data_size": 63488 00:23:06.330 }, 00:23:06.330 { 00:23:06.330 "name": "pt2", 00:23:06.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:06.330 "is_configured": true, 00:23:06.330 "data_offset": 2048, 00:23:06.330 "data_size": 63488 00:23:06.330 }, 00:23:06.330 { 00:23:06.330 "name": "pt3", 00:23:06.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:06.330 "is_configured": true, 00:23:06.330 "data_offset": 2048, 00:23:06.330 "data_size": 63488 00:23:06.330 } 00:23:06.330 ] 00:23:06.330 } 00:23:06.330 } 00:23:06.330 }' 
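The property checks that follow each of these dumps use one recurring pattern (bdev_raid.sh@203-208 in the trace): fetch a single bdev with bdev_get_bdevs, keep the lone object with jq '.[]', then compare individual fields against the expected values. A hedged sketch of that loop, reusing the RPC socket and field names from the log; the fixed pt1 pt2 pt3 list and the explicit exit-on-mismatch are illustrative, since the harness iterates $base_bdev_names and relies on its own error handling:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for name in pt1 pt2 pt3; do
        info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" | jq '.[]')
        # each passthru base bdev is expected to report 512-byte blocks and no metadata or DIF
        [[ $(jq .block_size <<< "$info") == 512 ]] || exit 1
        [[ $(jq .md_size <<< "$info") == null ]] || exit 1
        [[ $(jq .md_interleave <<< "$info") == null ]] || exit 1
        [[ $(jq .dif_type <<< "$info") == null ]] || exit 1
    done

Missing keys such as md_size evaluate to null under jq, which is why the trace repeatedly shows [[ null == null ]] for bdevs that carry no metadata.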
00:23:06.330 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:06.330 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:06.330 pt2 00:23:06.330 pt3' 00:23:06.330 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:06.330 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:06.330 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:06.587 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:06.587 "name": "pt1", 00:23:06.587 "aliases": [ 00:23:06.587 "00000000-0000-0000-0000-000000000001" 00:23:06.587 ], 00:23:06.587 "product_name": "passthru", 00:23:06.587 "block_size": 512, 00:23:06.587 "num_blocks": 65536, 00:23:06.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:06.587 "assigned_rate_limits": { 00:23:06.587 "rw_ios_per_sec": 0, 00:23:06.587 "rw_mbytes_per_sec": 0, 00:23:06.587 "r_mbytes_per_sec": 0, 00:23:06.587 "w_mbytes_per_sec": 0 00:23:06.587 }, 00:23:06.587 "claimed": true, 00:23:06.587 "claim_type": "exclusive_write", 00:23:06.587 "zoned": false, 00:23:06.587 "supported_io_types": { 00:23:06.587 "read": true, 00:23:06.587 "write": true, 00:23:06.587 "unmap": true, 00:23:06.587 "flush": true, 00:23:06.587 "reset": true, 00:23:06.587 "nvme_admin": false, 00:23:06.587 "nvme_io": false, 00:23:06.587 "nvme_io_md": false, 00:23:06.587 "write_zeroes": true, 00:23:06.587 "zcopy": true, 00:23:06.587 "get_zone_info": false, 00:23:06.587 "zone_management": false, 00:23:06.587 "zone_append": false, 00:23:06.588 "compare": false, 00:23:06.588 "compare_and_write": false, 00:23:06.588 "abort": true, 00:23:06.588 "seek_hole": false, 00:23:06.588 "seek_data": false, 00:23:06.588 "copy": true, 00:23:06.588 "nvme_iov_md": false 00:23:06.588 }, 00:23:06.588 "memory_domains": [ 00:23:06.588 { 00:23:06.588 "dma_device_id": "system", 00:23:06.588 "dma_device_type": 1 00:23:06.588 }, 00:23:06.588 { 00:23:06.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.588 "dma_device_type": 2 00:23:06.588 } 00:23:06.588 ], 00:23:06.588 "driver_specific": { 00:23:06.588 "passthru": { 00:23:06.588 "name": "pt1", 00:23:06.588 "base_bdev_name": "malloc1" 00:23:06.588 } 00:23:06.588 } 00:23:06.588 }' 00:23:06.588 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:06.588 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:06.588 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:06.588 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:06.846 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:06.846 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:06.846 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:06.846 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:06.846 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:06.846 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:06.846 14:05:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:07.104 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:07.104 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:07.104 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:07.104 14:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:07.366 "name": "pt2", 00:23:07.366 "aliases": [ 00:23:07.366 "00000000-0000-0000-0000-000000000002" 00:23:07.366 ], 00:23:07.366 "product_name": "passthru", 00:23:07.366 "block_size": 512, 00:23:07.366 "num_blocks": 65536, 00:23:07.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:07.366 "assigned_rate_limits": { 00:23:07.366 "rw_ios_per_sec": 0, 00:23:07.366 "rw_mbytes_per_sec": 0, 00:23:07.366 "r_mbytes_per_sec": 0, 00:23:07.366 "w_mbytes_per_sec": 0 00:23:07.366 }, 00:23:07.366 "claimed": true, 00:23:07.366 "claim_type": "exclusive_write", 00:23:07.366 "zoned": false, 00:23:07.366 "supported_io_types": { 00:23:07.366 "read": true, 00:23:07.366 "write": true, 00:23:07.366 "unmap": true, 00:23:07.366 "flush": true, 00:23:07.366 "reset": true, 00:23:07.366 "nvme_admin": false, 00:23:07.366 "nvme_io": false, 00:23:07.366 "nvme_io_md": false, 00:23:07.366 "write_zeroes": true, 00:23:07.366 "zcopy": true, 00:23:07.366 "get_zone_info": false, 00:23:07.366 "zone_management": false, 00:23:07.366 "zone_append": false, 00:23:07.366 "compare": false, 00:23:07.366 "compare_and_write": false, 00:23:07.366 "abort": true, 00:23:07.366 "seek_hole": false, 00:23:07.366 "seek_data": false, 00:23:07.366 "copy": true, 00:23:07.366 "nvme_iov_md": false 00:23:07.366 }, 00:23:07.366 "memory_domains": [ 00:23:07.366 { 00:23:07.366 "dma_device_id": "system", 00:23:07.366 "dma_device_type": 1 00:23:07.366 }, 00:23:07.366 { 00:23:07.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.366 "dma_device_type": 2 00:23:07.366 } 00:23:07.366 ], 00:23:07.366 "driver_specific": { 00:23:07.366 "passthru": { 00:23:07.366 "name": "pt2", 00:23:07.366 "base_bdev_name": "malloc2" 00:23:07.366 } 00:23:07.366 } 00:23:07.366 }' 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:07.366 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:07.626 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:07.884 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:07.884 "name": "pt3", 00:23:07.884 "aliases": [ 00:23:07.884 "00000000-0000-0000-0000-000000000003" 00:23:07.884 ], 00:23:07.884 "product_name": "passthru", 00:23:07.884 "block_size": 512, 00:23:07.884 "num_blocks": 65536, 00:23:07.884 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:07.884 "assigned_rate_limits": { 00:23:07.884 "rw_ios_per_sec": 0, 00:23:07.884 "rw_mbytes_per_sec": 0, 00:23:07.884 "r_mbytes_per_sec": 0, 00:23:07.884 "w_mbytes_per_sec": 0 00:23:07.884 }, 00:23:07.884 "claimed": true, 00:23:07.884 "claim_type": "exclusive_write", 00:23:07.884 "zoned": false, 00:23:07.884 "supported_io_types": { 00:23:07.884 "read": true, 00:23:07.884 "write": true, 00:23:07.884 "unmap": true, 00:23:07.884 "flush": true, 00:23:07.884 "reset": true, 00:23:07.884 "nvme_admin": false, 00:23:07.884 "nvme_io": false, 00:23:07.884 "nvme_io_md": false, 00:23:07.884 "write_zeroes": true, 00:23:07.884 "zcopy": true, 00:23:07.884 "get_zone_info": false, 00:23:07.884 "zone_management": false, 00:23:07.884 "zone_append": false, 00:23:07.884 "compare": false, 00:23:07.884 "compare_and_write": false, 00:23:07.884 "abort": true, 00:23:07.884 "seek_hole": false, 00:23:07.884 "seek_data": false, 00:23:07.884 "copy": true, 00:23:07.884 "nvme_iov_md": false 00:23:07.884 }, 00:23:07.884 "memory_domains": [ 00:23:07.884 { 00:23:07.884 "dma_device_id": "system", 00:23:07.884 "dma_device_type": 1 00:23:07.884 }, 00:23:07.884 { 00:23:07.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.884 "dma_device_type": 2 00:23:07.884 } 00:23:07.884 ], 00:23:07.884 "driver_specific": { 00:23:07.884 "passthru": { 00:23:07.884 "name": "pt3", 00:23:07.884 "base_bdev_name": "malloc3" 00:23:07.884 } 00:23:07.884 } 00:23:07.884 }' 00:23:07.884 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:07.884 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.141 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:08.141 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.141 14:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.141 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:08.141 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.141 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.142 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:08.142 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.398 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.398 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:08.398 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:23:08.398 14:05:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:08.656 [2024-07-25 14:05:57.510087] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.656 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' c53a09e1-c32d-4f25-b987-c5538c427c32 '!=' c53a09e1-c32d-4f25-b987-c5538c427c32 ']' 00:23:08.656 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:23:08.656 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:08.656 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:08.656 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:08.913 [2024-07-25 14:05:57.765902] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.913 14:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.170 14:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.170 "name": "raid_bdev1", 00:23:09.170 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:09.170 "strip_size_kb": 0, 00:23:09.170 "state": "online", 00:23:09.170 "raid_level": "raid1", 00:23:09.170 "superblock": true, 00:23:09.170 "num_base_bdevs": 3, 00:23:09.170 "num_base_bdevs_discovered": 2, 00:23:09.170 "num_base_bdevs_operational": 2, 00:23:09.170 "base_bdevs_list": [ 00:23:09.170 { 00:23:09.170 "name": null, 00:23:09.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.170 "is_configured": false, 00:23:09.170 "data_offset": 2048, 00:23:09.170 "data_size": 63488 00:23:09.170 }, 00:23:09.170 { 00:23:09.170 "name": "pt2", 00:23:09.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:09.170 "is_configured": true, 00:23:09.170 "data_offset": 2048, 00:23:09.170 "data_size": 63488 00:23:09.170 }, 00:23:09.170 { 00:23:09.170 "name": "pt3", 00:23:09.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:09.171 "is_configured": true, 00:23:09.171 "data_offset": 2048, 00:23:09.171 
"data_size": 63488 00:23:09.171 } 00:23:09.171 ] 00:23:09.171 }' 00:23:09.171 14:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.171 14:05:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.737 14:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:09.994 [2024-07-25 14:05:58.926079] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:09.994 [2024-07-25 14:05:58.926316] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:09.994 [2024-07-25 14:05:58.926534] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:09.994 [2024-07-25 14:05:58.926733] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:09.994 [2024-07-25 14:05:58.926852] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:23:09.994 14:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:23:09.994 14:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.314 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:23:10.314 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:23:10.314 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:10.314 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:23:10.314 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:10.571 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:10.571 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:23:10.571 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:11.134 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:11.134 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:23:11.134 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:23:11.134 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:23:11.134 14:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:11.392 [2024-07-25 14:06:00.214583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:11.392 [2024-07-25 14:06:00.214963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.392 [2024-07-25 14:06:00.215134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:11.392 [2024-07-25 14:06:00.215273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.392 [2024-07-25 14:06:00.218058] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:11.392 [2024-07-25 14:06:00.218244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:11.392 [2024-07-25 14:06:00.218494] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:11.392 [2024-07-25 14:06:00.218668] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:11.392 pt2 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.392 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.393 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.393 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.393 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.650 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.650 "name": "raid_bdev1", 00:23:11.650 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:11.650 "strip_size_kb": 0, 00:23:11.650 "state": "configuring", 00:23:11.650 "raid_level": "raid1", 00:23:11.650 "superblock": true, 00:23:11.650 "num_base_bdevs": 3, 00:23:11.650 "num_base_bdevs_discovered": 1, 00:23:11.650 "num_base_bdevs_operational": 2, 00:23:11.650 "base_bdevs_list": [ 00:23:11.650 { 00:23:11.650 "name": null, 00:23:11.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.650 "is_configured": false, 00:23:11.650 "data_offset": 2048, 00:23:11.650 "data_size": 63488 00:23:11.651 }, 00:23:11.651 { 00:23:11.651 "name": "pt2", 00:23:11.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:11.651 "is_configured": true, 00:23:11.651 "data_offset": 2048, 00:23:11.651 "data_size": 63488 00:23:11.651 }, 00:23:11.651 { 00:23:11.651 "name": null, 00:23:11.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:11.651 "is_configured": false, 00:23:11.651 "data_offset": 2048, 00:23:11.651 "data_size": 63488 00:23:11.651 } 00:23:11.651 ] 00:23:11.651 }' 00:23:11.651 14:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.651 14:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:23:12.585 14:06:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:12.585 [2024-07-25 14:06:01.518881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:12.585 [2024-07-25 14:06:01.519275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.585 [2024-07-25 14:06:01.519463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:12.585 [2024-07-25 14:06:01.519627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.585 [2024-07-25 14:06:01.520298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.585 [2024-07-25 14:06:01.520485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:12.585 [2024-07-25 14:06:01.520716] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:12.585 [2024-07-25 14:06:01.520863] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:12.585 [2024-07-25 14:06:01.521109] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:23:12.585 [2024-07-25 14:06:01.521238] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:12.585 [2024-07-25 14:06:01.521398] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:12.585 [2024-07-25 14:06:01.521903] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:23:12.585 [2024-07-25 14:06:01.522046] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:23:12.585 [2024-07-25 14:06:01.522304] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.585 pt3 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.585 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.843 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.843 "name": "raid_bdev1", 00:23:12.843 "uuid": 
"c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:12.843 "strip_size_kb": 0, 00:23:12.843 "state": "online", 00:23:12.843 "raid_level": "raid1", 00:23:12.843 "superblock": true, 00:23:12.843 "num_base_bdevs": 3, 00:23:12.843 "num_base_bdevs_discovered": 2, 00:23:12.843 "num_base_bdevs_operational": 2, 00:23:12.843 "base_bdevs_list": [ 00:23:12.843 { 00:23:12.843 "name": null, 00:23:12.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.843 "is_configured": false, 00:23:12.843 "data_offset": 2048, 00:23:12.843 "data_size": 63488 00:23:12.843 }, 00:23:12.843 { 00:23:12.843 "name": "pt2", 00:23:12.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:12.843 "is_configured": true, 00:23:12.843 "data_offset": 2048, 00:23:12.843 "data_size": 63488 00:23:12.843 }, 00:23:12.843 { 00:23:12.843 "name": "pt3", 00:23:12.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:12.843 "is_configured": true, 00:23:12.843 "data_offset": 2048, 00:23:12.843 "data_size": 63488 00:23:12.843 } 00:23:12.843 ] 00:23:12.843 }' 00:23:12.843 14:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.843 14:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.779 14:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:13.779 [2024-07-25 14:06:02.727074] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:13.779 [2024-07-25 14:06:02.727384] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:13.779 [2024-07-25 14:06:02.727572] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.779 [2024-07-25 14:06:02.727759] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.779 [2024-07-25 14:06:02.727878] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:23:13.779 14:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.779 14:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:23:14.037 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:23:14.037 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:23:14.037 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:23:14.037 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:23:14.037 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:14.294 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:14.550 [2024-07-25 14:06:03.567274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:14.550 [2024-07-25 14:06:03.567746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.550 [2024-07-25 14:06:03.567928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:14.550 [2024-07-25 14:06:03.568074] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.550 [2024-07-25 14:06:03.570794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.550 [2024-07-25 14:06:03.570988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:14.550 [2024-07-25 14:06:03.571242] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:14.550 [2024-07-25 14:06:03.571420] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:14.550 [2024-07-25 14:06:03.571848] bdev_raid.c:3743:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:14.550 [2024-07-25 14:06:03.571987] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.550 [2024-07-25 14:06:03.572049] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:23:14.550 [2024-07-25 14:06:03.572318] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:14.550 pt1 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.550 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.808 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.808 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.808 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.808 "name": "raid_bdev1", 00:23:14.808 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:14.808 "strip_size_kb": 0, 00:23:14.808 "state": "configuring", 00:23:14.808 "raid_level": "raid1", 00:23:14.808 "superblock": true, 00:23:14.808 "num_base_bdevs": 3, 00:23:14.808 "num_base_bdevs_discovered": 1, 00:23:14.808 "num_base_bdevs_operational": 2, 00:23:14.808 "base_bdevs_list": [ 00:23:14.808 { 00:23:14.808 "name": null, 00:23:14.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.808 "is_configured": false, 00:23:14.808 "data_offset": 2048, 00:23:14.808 "data_size": 63488 00:23:14.808 }, 00:23:14.808 { 00:23:14.808 "name": "pt2", 00:23:14.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.808 "is_configured": true, 00:23:14.808 "data_offset": 2048, 
00:23:14.808 "data_size": 63488 00:23:14.808 }, 00:23:14.808 { 00:23:14.808 "name": null, 00:23:14.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:14.808 "is_configured": false, 00:23:14.808 "data_offset": 2048, 00:23:14.808 "data_size": 63488 00:23:14.808 } 00:23:14.808 ] 00:23:14.808 }' 00:23:14.808 14:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.808 14:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.741 14:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:15.741 14:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:15.741 14:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:23:15.741 14:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:15.999 [2024-07-25 14:06:05.039641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:15.999 [2024-07-25 14:06:05.040049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.999 [2024-07-25 14:06:05.040218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:15.999 [2024-07-25 14:06:05.040359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.999 [2024-07-25 14:06:05.041089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.999 [2024-07-25 14:06:05.041274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:15.999 [2024-07-25 14:06:05.041512] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:15.999 [2024-07-25 14:06:05.041659] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:16.257 [2024-07-25 14:06:05.041954] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:23:16.257 [2024-07-25 14:06:05.042086] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:16.257 [2024-07-25 14:06:05.042257] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:23:16.257 [2024-07-25 14:06:05.042741] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:23:16.257 [2024-07-25 14:06:05.042871] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:23:16.257 [2024-07-25 14:06:05.043129] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.257 pt3 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.257 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.514 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.514 "name": "raid_bdev1", 00:23:16.514 "uuid": "c53a09e1-c32d-4f25-b987-c5538c427c32", 00:23:16.514 "strip_size_kb": 0, 00:23:16.514 "state": "online", 00:23:16.514 "raid_level": "raid1", 00:23:16.514 "superblock": true, 00:23:16.514 "num_base_bdevs": 3, 00:23:16.514 "num_base_bdevs_discovered": 2, 00:23:16.514 "num_base_bdevs_operational": 2, 00:23:16.514 "base_bdevs_list": [ 00:23:16.514 { 00:23:16.514 "name": null, 00:23:16.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.514 "is_configured": false, 00:23:16.514 "data_offset": 2048, 00:23:16.514 "data_size": 63488 00:23:16.515 }, 00:23:16.515 { 00:23:16.515 "name": "pt2", 00:23:16.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.515 "is_configured": true, 00:23:16.515 "data_offset": 2048, 00:23:16.515 "data_size": 63488 00:23:16.515 }, 00:23:16.515 { 00:23:16.515 "name": "pt3", 00:23:16.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:16.515 "is_configured": true, 00:23:16.515 "data_offset": 2048, 00:23:16.515 "data_size": 63488 00:23:16.515 } 00:23:16.515 ] 00:23:16.515 }' 00:23:16.515 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.515 14:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.080 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:17.080 14:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:17.343 14:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:23:17.343 14:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:23:17.343 14:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:17.630 [2024-07-25 14:06:06.548210] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' c53a09e1-c32d-4f25-b987-c5538c427c32 '!=' c53a09e1-c32d-4f25-b987-c5538c427c32 ']' 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 132937 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 132937 ']' 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 132937 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132937 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132937' 00:23:17.630 killing process with pid 132937 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 132937 00:23:17.630 14:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 132937 00:23:17.630 [2024-07-25 14:06:06.594510] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.630 [2024-07-25 14:06:06.594608] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.630 [2024-07-25 14:06:06.594685] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.630 [2024-07-25 14:06:06.594895] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:23:17.888 [2024-07-25 14:06:06.850268] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:19.263 ************************************ 00:23:19.263 END TEST raid_superblock_test 00:23:19.263 ************************************ 00:23:19.263 14:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:23:19.263 00:23:19.263 real 0m26.748s 00:23:19.263 user 0m49.553s 00:23:19.263 sys 0m3.116s 00:23:19.263 14:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.263 14:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.263 14:06:08 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:23:19.263 14:06:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:19.263 14:06:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.263 14:06:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:19.263 ************************************ 00:23:19.263 START TEST raid_read_error_test 00:23:19.263 ************************************ 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid1 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=3 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:19.263 14:06:08 
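Before the per-bdev RPCs that follow, this is the stack the read-error test builds, condensed from the steps below into one sequence per base bdev — a sketch only, using the same socket and the commands seen later in this log (malloc bdev, error-injection wrapper, passthru, then the raid1 volume):
# sketch: BaseBdev1_malloc -> EE_BaseBdev1_malloc (error bdev) -> BaseBdev1 (passthru) -> raid_bdev1 member
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# ...the same three calls are repeated for BaseBdev2 and BaseBdev3, then the array is assembled:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
# and a read failure is injected on the first error bdev while bdevperf runs:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure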
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid1 '!=' raid1 ']' 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@892 -- # strip_size=0 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.S7Striyqhp 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=133730 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 133730 /var/tmp/spdk-raid.sock 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 133730 ']' 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.263 14:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.263 [2024-07-25 14:06:08.139723] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:23:19.263 [2024-07-25 14:06:08.139945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133730 ] 00:23:19.522 [2024-07-25 14:06:08.312417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.522 [2024-07-25 14:06:08.534628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.780 [2024-07-25 14:06:08.736515] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.344 14:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.344 14:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:23:20.344 14:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:23:20.344 14:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:20.602 BaseBdev1_malloc 00:23:20.602 14:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:20.861 true 00:23:20.861 14:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:21.120 [2024-07-25 14:06:09.995816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:21.120 [2024-07-25 14:06:09.995960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.120 [2024-07-25 14:06:09.996011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:21.120 [2024-07-25 14:06:09.996038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.120 [2024-07-25 14:06:09.998761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.120 [2024-07-25 14:06:09.998823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:21.120 BaseBdev1 00:23:21.120 14:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:23:21.120 14:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:21.378 BaseBdev2_malloc 00:23:21.378 14:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:21.636 true 00:23:21.636 14:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:21.894 [2024-07-25 14:06:10.832966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:21.894 [2024-07-25 14:06:10.833142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.894 [2024-07-25 14:06:10.833201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:21.894 [2024-07-25 14:06:10.833228] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.894 [2024-07-25 14:06:10.835881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.894 [2024-07-25 14:06:10.835940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:21.894 BaseBdev2 00:23:21.894 14:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:23:21.894 14:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:22.151 BaseBdev3_malloc 00:23:22.151 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:22.408 true 00:23:22.408 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:22.666 [2024-07-25 14:06:11.652681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:22.666 [2024-07-25 14:06:11.652856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.666 [2024-07-25 14:06:11.652921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:22.666 [2024-07-25 14:06:11.652954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.666 [2024-07-25 14:06:11.655741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.666 [2024-07-25 14:06:11.655811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:22.666 BaseBdev3 00:23:22.666 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:22.924 [2024-07-25 14:06:11.904809] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.924 [2024-07-25 14:06:11.907133] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.924 [2024-07-25 14:06:11.907238] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:22.924 [2024-07-25 14:06:11.907560] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:23:22.924 [2024-07-25 14:06:11.907584] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:22.924 [2024-07-25 14:06:11.907736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:22.924 [2024-07-25 14:06:11.908196] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:23:22.924 [2024-07-25 14:06:11.908220] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:23:22.924 [2024-07-25 14:06:11.908466] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.924 14:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.181 14:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.181 "name": "raid_bdev1", 00:23:23.181 "uuid": "f893933f-5592-416f-bc70-39e29312b474", 00:23:23.181 "strip_size_kb": 0, 00:23:23.181 "state": "online", 00:23:23.181 "raid_level": "raid1", 00:23:23.181 "superblock": true, 00:23:23.181 "num_base_bdevs": 3, 00:23:23.181 "num_base_bdevs_discovered": 3, 00:23:23.181 "num_base_bdevs_operational": 3, 00:23:23.181 "base_bdevs_list": [ 00:23:23.181 { 00:23:23.181 "name": "BaseBdev1", 00:23:23.181 "uuid": "b1402650-01c8-552e-af37-dd25b479daae", 00:23:23.181 "is_configured": true, 00:23:23.181 "data_offset": 2048, 00:23:23.181 "data_size": 63488 00:23:23.181 }, 00:23:23.181 { 00:23:23.181 "name": "BaseBdev2", 00:23:23.182 "uuid": "69c14e2c-b8c4-5baa-ad56-455b6a25ade1", 00:23:23.182 "is_configured": true, 00:23:23.182 "data_offset": 2048, 00:23:23.182 "data_size": 63488 00:23:23.182 }, 00:23:23.182 { 00:23:23.182 "name": "BaseBdev3", 00:23:23.182 "uuid": "f19c9149-4a25-554c-8b9f-e00ef825b185", 00:23:23.182 "is_configured": true, 00:23:23.182 "data_offset": 2048, 00:23:23.182 "data_size": 63488 00:23:23.182 } 00:23:23.182 ] 00:23:23.182 }' 00:23:23.182 14:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.182 14:06:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.116 14:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:23:24.116 14:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:24.116 [2024-07-25 14:06:12.986378] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:25.049 14:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ read = \w\r\i\t\e ]] 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # 
expected_num_base_bdevs=3 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.307 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.564 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.564 "name": "raid_bdev1", 00:23:25.564 "uuid": "f893933f-5592-416f-bc70-39e29312b474", 00:23:25.564 "strip_size_kb": 0, 00:23:25.564 "state": "online", 00:23:25.564 "raid_level": "raid1", 00:23:25.564 "superblock": true, 00:23:25.564 "num_base_bdevs": 3, 00:23:25.564 "num_base_bdevs_discovered": 3, 00:23:25.564 "num_base_bdevs_operational": 3, 00:23:25.564 "base_bdevs_list": [ 00:23:25.564 { 00:23:25.564 "name": "BaseBdev1", 00:23:25.564 "uuid": "b1402650-01c8-552e-af37-dd25b479daae", 00:23:25.564 "is_configured": true, 00:23:25.564 "data_offset": 2048, 00:23:25.564 "data_size": 63488 00:23:25.564 }, 00:23:25.564 { 00:23:25.564 "name": "BaseBdev2", 00:23:25.564 "uuid": "69c14e2c-b8c4-5baa-ad56-455b6a25ade1", 00:23:25.564 "is_configured": true, 00:23:25.564 "data_offset": 2048, 00:23:25.564 "data_size": 63488 00:23:25.564 }, 00:23:25.564 { 00:23:25.564 "name": "BaseBdev3", 00:23:25.564 "uuid": "f19c9149-4a25-554c-8b9f-e00ef825b185", 00:23:25.564 "is_configured": true, 00:23:25.564 "data_offset": 2048, 00:23:25.564 "data_size": 63488 00:23:25.564 } 00:23:25.564 ] 00:23:25.564 }' 00:23:25.564 14:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.564 14:06:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.128 14:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:26.386 [2024-07-25 14:06:15.373758] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.386 [2024-07-25 14:06:15.373871] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.386 [2024-07-25 14:06:15.377090] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.386 [2024-07-25 14:06:15.377149] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.386 [2024-07-25 14:06:15.377267] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.386 [2024-07-25 14:06:15.377283] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:23:26.386 0 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 133730 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 133730 ']' 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 133730 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133730 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133730' 00:23:26.386 killing process with pid 133730 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 133730 00:23:26.386 14:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 133730 00:23:26.386 [2024-07-25 14:06:15.422106] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:26.644 [2024-07-25 14:06:15.615368] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.S7Striyqhp 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.00 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid1 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@935 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:28.017 00:23:28.017 real 0m8.748s 00:23:28.017 user 0m13.577s 00:23:28.017 sys 0m1.027s 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.017 14:06:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.017 ************************************ 00:23:28.017 END TEST raid_read_error_test 00:23:28.017 ************************************ 00:23:28.017 14:06:16 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:23:28.017 14:06:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:28.017 14:06:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.017 14:06:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:28.017 ************************************ 00:23:28.017 START TEST raid_write_error_test 00:23:28.017 ************************************ 00:23:28.017 14:06:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid1 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=3 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid1 '!=' raid1 ']' 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@892 -- # strip_size=0 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.jSY9zlqjWe 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=133937 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 133937 /var/tmp/spdk-raid.sock 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 133937 ']' 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 
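
The write-error run above starts a dedicated bdevperf in wait-for-tests mode (-z) on its own RPC socket and only proceeds once that socket answers. A minimal stand-alone sketch of that launch, using just the paths and flags visible in the trace (the log redirection and the polling loop stand in for the upstream helpers and are assumptions, not the real waitforlisten implementation):

#!/usr/bin/env bash
# Sketch only: launch bdevperf with a private RPC socket and wait until it
# accepts RPC calls, mirroring bdev_raid.sh@895-@899 in the trace above.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk            # repo path as seen in the log
SOCK=/var/tmp/spdk-raid.sock                 # per-test RPC socket
LOG=$(mktemp -p /raidtest)                   # bdevperf log, parsed at teardown
"$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$LOG" 2>&1 &
raid_pid=$!
# Stand-in for waitforlisten: poll until rpc.py can reach the socket.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$raid_pid"                      # bail out if bdevperf died early
    sleep 0.1
done
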
00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:28.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.017 14:06:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.017 [2024-07-25 14:06:16.937313] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:23:28.017 [2024-07-25 14:06:16.937538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133937 ] 00:23:28.275 [2024-07-25 14:06:17.098474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.275 [2024-07-25 14:06:17.314992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.533 [2024-07-25 14:06:17.510556] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:29.099 14:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.099 14:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:23:29.099 14:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:23:29.099 14:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:29.405 BaseBdev1_malloc 00:23:29.405 14:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:29.663 true 00:23:29.663 14:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:29.920 [2024-07-25 14:06:18.741602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:29.920 [2024-07-25 14:06:18.741856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.920 [2024-07-25 14:06:18.741949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:29.920 [2024-07-25 14:06:18.741991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.920 [2024-07-25 14:06:18.744899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.920 [2024-07-25 14:06:18.744972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:29.920 BaseBdev1 00:23:29.920 14:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:23:29.920 14:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:30.178 BaseBdev2_malloc 00:23:30.178 14:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:30.435 true 
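
Each base device in the trace above is built as a three-layer stack: a malloc bdev, an error bdev on top of it (so read or write failures can be injected later), and a passthru bdev carrying the name the RAID layer will claim. A hedged sketch of that stacking, assembled only from the RPC calls shown in the log (the loop, the rpc() wrapper, and the variable names are assumptions; $SPDK and $SOCK come from the launch sketch above):

# Sketch: malloc -> error -> passthru stacking for each base bdev, followed by
# the raid1 assembly with an on-disk superblock (-s), as traced around it.
rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }
for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"           # 32 MiB, 512 B blocks
    rpc bdev_error_create "${bdev}_malloc"                      # exposes EE_<name>_malloc
    rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"  # exposes <name>
done
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
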
00:23:30.435 14:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:30.692 [2024-07-25 14:06:19.558261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:30.692 [2024-07-25 14:06:19.558434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.692 [2024-07-25 14:06:19.558500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:30.692 [2024-07-25 14:06:19.558529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.692 [2024-07-25 14:06:19.561230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.692 [2024-07-25 14:06:19.561288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:30.692 BaseBdev2 00:23:30.692 14:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:23:30.692 14:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:30.949 BaseBdev3_malloc 00:23:30.949 14:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:31.206 true 00:23:31.206 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:31.464 [2024-07-25 14:06:20.384904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:31.464 [2024-07-25 14:06:20.385060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.464 [2024-07-25 14:06:20.385108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:31.464 [2024-07-25 14:06:20.385142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.464 [2024-07-25 14:06:20.387888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.464 [2024-07-25 14:06:20.387965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:31.464 BaseBdev3 00:23:31.464 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:23:31.721 [2024-07-25 14:06:20.625099] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.721 [2024-07-25 14:06:20.627566] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:31.721 [2024-07-25 14:06:20.627694] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:31.721 [2024-07-25 14:06:20.627995] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:23:31.721 [2024-07-25 14:06:20.628023] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:31.721 [2024-07-25 14:06:20.628164] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:31.721 [2024-07-25 14:06:20.628621] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:23:31.721 [2024-07-25 14:06:20.628648] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:23:31.721 [2024-07-25 14:06:20.628939] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.721 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:31.721 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.721 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.721 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.722 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.979 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.979 "name": "raid_bdev1", 00:23:31.979 "uuid": "291405ef-de92-41a2-b600-afc75b15af90", 00:23:31.979 "strip_size_kb": 0, 00:23:31.979 "state": "online", 00:23:31.979 "raid_level": "raid1", 00:23:31.979 "superblock": true, 00:23:31.979 "num_base_bdevs": 3, 00:23:31.979 "num_base_bdevs_discovered": 3, 00:23:31.979 "num_base_bdevs_operational": 3, 00:23:31.979 "base_bdevs_list": [ 00:23:31.979 { 00:23:31.979 "name": "BaseBdev1", 00:23:31.979 "uuid": "af5aff8d-501a-5149-9508-969aea7cb614", 00:23:31.979 "is_configured": true, 00:23:31.979 "data_offset": 2048, 00:23:31.979 "data_size": 63488 00:23:31.979 }, 00:23:31.979 { 00:23:31.979 "name": "BaseBdev2", 00:23:31.979 "uuid": "d62c10e0-4c36-5d3a-b2c9-30e2acc584a5", 00:23:31.979 "is_configured": true, 00:23:31.979 "data_offset": 2048, 00:23:31.979 "data_size": 63488 00:23:31.979 }, 00:23:31.979 { 00:23:31.979 "name": "BaseBdev3", 00:23:31.979 "uuid": "37ba873a-9dc0-5b6f-ad35-16bc7533f7c2", 00:23:31.979 "is_configured": true, 00:23:31.979 "data_offset": 2048, 00:23:31.979 "data_size": 63488 00:23:31.979 } 00:23:31.979 ] 00:23:31.979 }' 00:23:31.979 14:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.979 14:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.544 14:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:23:32.544 14:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:32.802 [2024-07-25 14:06:21.658743] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:23:33.736 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:33.993 [2024-07-25 14:06:22.802337] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:33.993 [2024-07-25 14:06:22.802479] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:33.993 [2024-07-25 14:06:22.802800] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ write = \w\r\i\t\e ]] 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@921 -- # expected_num_base_bdevs=2 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.993 14:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.251 14:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.251 "name": "raid_bdev1", 00:23:34.251 "uuid": "291405ef-de92-41a2-b600-afc75b15af90", 00:23:34.251 "strip_size_kb": 0, 00:23:34.251 "state": "online", 00:23:34.251 "raid_level": "raid1", 00:23:34.251 "superblock": true, 00:23:34.251 "num_base_bdevs": 3, 00:23:34.251 "num_base_bdevs_discovered": 2, 00:23:34.251 "num_base_bdevs_operational": 2, 00:23:34.251 "base_bdevs_list": [ 00:23:34.251 { 00:23:34.251 "name": null, 00:23:34.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.251 "is_configured": false, 00:23:34.251 "data_offset": 2048, 00:23:34.251 "data_size": 63488 00:23:34.251 }, 00:23:34.251 { 00:23:34.251 "name": "BaseBdev2", 00:23:34.251 "uuid": "d62c10e0-4c36-5d3a-b2c9-30e2acc584a5", 00:23:34.251 "is_configured": true, 00:23:34.251 "data_offset": 2048, 00:23:34.251 "data_size": 63488 00:23:34.251 }, 00:23:34.251 { 00:23:34.251 "name": "BaseBdev3", 00:23:34.251 "uuid": 
"37ba873a-9dc0-5b6f-ad35-16bc7533f7c2", 00:23:34.251 "is_configured": true, 00:23:34.251 "data_offset": 2048, 00:23:34.251 "data_size": 63488 00:23:34.251 } 00:23:34.251 ] 00:23:34.251 }' 00:23:34.251 14:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.251 14:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.816 14:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:35.074 [2024-07-25 14:06:23.994404] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.074 [2024-07-25 14:06:23.994460] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.074 [2024-07-25 14:06:23.997514] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.074 [2024-07-25 14:06:23.997583] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.074 [2024-07-25 14:06:23.997669] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.074 [2024-07-25 14:06:23.997683] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:23:35.074 0 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 133937 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 133937 ']' 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 133937 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133937 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133937' 00:23:35.074 killing process with pid 133937 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 133937 00:23:35.074 14:06:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 133937 00:23:35.074 [2024-07-25 14:06:24.039792] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:35.332 [2024-07-25 14:06:24.231184] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.jSY9zlqjWe 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.00 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid1 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- 
# return 0 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@935 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:36.703 00:23:36.703 real 0m8.559s 00:23:36.703 user 0m13.221s 00:23:36.703 sys 0m0.993s 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.703 ************************************ 00:23:36.703 END TEST raid_write_error_test 00:23:36.703 ************************************ 00:23:36.703 14:06:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.703 14:06:25 bdev_raid -- bdev/bdev_raid.sh@1019 -- # for n in {2..4} 00:23:36.703 14:06:25 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:23:36.703 14:06:25 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:23:36.703 14:06:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:36.703 14:06:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.703 14:06:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:36.703 ************************************ 00:23:36.703 START TEST raid_state_function_test 00:23:36.703 ************************************ 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 
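
The pass/fail verdict for the write-error run above comes from the bdevperf log rather than from the RAID state: because raid1 has redundancy, an injected failure on one base bdev must not surface as failed I/O at the RAID level. A sketch of that check, following the grep/awk pipeline in the trace ($LOG stands for the mktemp'd bdevperf log; treating field 6 as the failures-per-second column is taken from the script's own awk call):

# Sketch: turn the bdevperf log into a verdict, as bdev_raid.sh@933-@935 does.
fail_per_s=$(grep -v Job "$LOG" | grep raid_bdev1 | awk '{print $6}')
# raid1 is redundant (has_redundancy returns 0), so the expected failure rate
# at the RAID level is exactly 0.00 despite the injected base-bdev error.
[[ $fail_per_s == "0.00" ]]
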
00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=134143 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134143' 00:23:36.703 Process raid pid: 134143 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:36.703 14:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 134143 /var/tmp/spdk-raid.sock 00:23:36.704 14:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 134143 ']' 00:23:36.704 14:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:36.704 14:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:36.704 14:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:36.704 14:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.704 14:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.704 [2024-07-25 14:06:25.556148] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
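
The state-function test starting above takes a different angle: it declares a raid0 array with strip size 64, four base bdevs, and no superblock, creates it while none of the base bdevs exist yet, and expects the array to sit in the "configuring" state until all four appear. A sketch of that first check, using only the create and query calls from the trace (pulling .state with jq is shorthand for what verify_raid_bdev_state does with the full JSON object; the rpc() wrapper is the assumed helper from earlier sketches):

# Sketch: create Existed_Raid over base bdevs that do not exist yet and assert
# the "configuring" state, mirroring bdev_raid.sh@250/@251 in the trace.
rpc bdev_raid_create -z 64 -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
state=$(rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state')
[[ $state == "configuring" ]]
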
00:23:36.704 [2024-07-25 14:06:25.556374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.704 [2024-07-25 14:06:25.722015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.961 [2024-07-25 14:06:25.939551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.218 [2024-07-25 14:06:26.145200] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:37.783 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.783 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:23:37.783 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:37.783 [2024-07-25 14:06:26.814788] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:37.783 [2024-07-25 14:06:26.814926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:37.783 [2024-07-25 14:06:26.814944] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:37.783 [2024-07-25 14:06:26.814971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:37.783 [2024-07-25 14:06:26.814982] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:37.783 [2024-07-25 14:06:26.815000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:37.784 [2024-07-25 14:06:26.815008] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:37.784 [2024-07-25 14:06:26.815033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.042 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.042 14:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.042 "name": "Existed_Raid", 00:23:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.042 "strip_size_kb": 64, 00:23:38.042 "state": "configuring", 00:23:38.042 "raid_level": "raid0", 00:23:38.042 "superblock": false, 00:23:38.042 "num_base_bdevs": 4, 00:23:38.042 "num_base_bdevs_discovered": 0, 00:23:38.042 "num_base_bdevs_operational": 4, 00:23:38.042 "base_bdevs_list": [ 00:23:38.042 { 00:23:38.042 "name": "BaseBdev1", 00:23:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.042 "is_configured": false, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 0 00:23:38.042 }, 00:23:38.042 { 00:23:38.042 "name": "BaseBdev2", 00:23:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.042 "is_configured": false, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 0 00:23:38.042 }, 00:23:38.042 { 00:23:38.042 "name": "BaseBdev3", 00:23:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.042 "is_configured": false, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 0 00:23:38.042 }, 00:23:38.042 { 00:23:38.042 "name": "BaseBdev4", 00:23:38.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.042 "is_configured": false, 00:23:38.042 "data_offset": 0, 00:23:38.042 "data_size": 0 00:23:38.042 } 00:23:38.042 ] 00:23:38.042 }' 00:23:38.042 14:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.042 14:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.973 14:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:39.230 [2024-07-25 14:06:28.070921] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:39.230 [2024-07-25 14:06:28.070992] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:23:39.230 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:39.486 [2024-07-25 14:06:28.322973] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:39.486 [2024-07-25 14:06:28.323047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:39.486 [2024-07-25 14:06:28.323061] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:39.486 [2024-07-25 14:06:28.323119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:39.486 [2024-07-25 14:06:28.323130] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:39.486 [2024-07-25 14:06:28.323166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:39.486 [2024-07-25 14:06:28.323175] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:39.486 [2024-07-25 14:06:28.323201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:39.486 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:39.742 [2024-07-25 14:06:28.599311] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:39.742 BaseBdev1 00:23:39.742 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:39.742 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:39.742 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:39.742 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:39.742 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:39.742 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:39.743 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:39.999 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:40.256 [ 00:23:40.256 { 00:23:40.256 "name": "BaseBdev1", 00:23:40.256 "aliases": [ 00:23:40.256 "8ea308ad-e2a6-4b0b-8e99-829218c993d0" 00:23:40.256 ], 00:23:40.256 "product_name": "Malloc disk", 00:23:40.256 "block_size": 512, 00:23:40.256 "num_blocks": 65536, 00:23:40.256 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:40.256 "assigned_rate_limits": { 00:23:40.256 "rw_ios_per_sec": 0, 00:23:40.256 "rw_mbytes_per_sec": 0, 00:23:40.256 "r_mbytes_per_sec": 0, 00:23:40.256 "w_mbytes_per_sec": 0 00:23:40.256 }, 00:23:40.256 "claimed": true, 00:23:40.256 "claim_type": "exclusive_write", 00:23:40.256 "zoned": false, 00:23:40.256 "supported_io_types": { 00:23:40.256 "read": true, 00:23:40.256 "write": true, 00:23:40.256 "unmap": true, 00:23:40.256 "flush": true, 00:23:40.256 "reset": true, 00:23:40.256 "nvme_admin": false, 00:23:40.256 "nvme_io": false, 00:23:40.256 "nvme_io_md": false, 00:23:40.256 "write_zeroes": true, 00:23:40.256 "zcopy": true, 00:23:40.256 "get_zone_info": false, 00:23:40.256 "zone_management": false, 00:23:40.256 "zone_append": false, 00:23:40.256 "compare": false, 00:23:40.256 "compare_and_write": false, 00:23:40.256 "abort": true, 00:23:40.256 "seek_hole": false, 00:23:40.256 "seek_data": false, 00:23:40.256 "copy": true, 00:23:40.256 "nvme_iov_md": false 00:23:40.256 }, 00:23:40.256 "memory_domains": [ 00:23:40.256 { 00:23:40.256 "dma_device_id": "system", 00:23:40.256 "dma_device_type": 1 00:23:40.256 }, 00:23:40.256 { 00:23:40.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.256 "dma_device_type": 2 00:23:40.256 } 00:23:40.256 ], 00:23:40.256 "driver_specific": {} 00:23:40.256 } 00:23:40.256 ] 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.256 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.257 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.257 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.514 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.514 "name": "Existed_Raid", 00:23:40.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.514 "strip_size_kb": 64, 00:23:40.514 "state": "configuring", 00:23:40.514 "raid_level": "raid0", 00:23:40.514 "superblock": false, 00:23:40.514 "num_base_bdevs": 4, 00:23:40.514 "num_base_bdevs_discovered": 1, 00:23:40.514 "num_base_bdevs_operational": 4, 00:23:40.514 "base_bdevs_list": [ 00:23:40.514 { 00:23:40.514 "name": "BaseBdev1", 00:23:40.514 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:40.514 "is_configured": true, 00:23:40.514 "data_offset": 0, 00:23:40.514 "data_size": 65536 00:23:40.514 }, 00:23:40.514 { 00:23:40.514 "name": "BaseBdev2", 00:23:40.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.514 "is_configured": false, 00:23:40.514 "data_offset": 0, 00:23:40.514 "data_size": 0 00:23:40.514 }, 00:23:40.514 { 00:23:40.514 "name": "BaseBdev3", 00:23:40.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.514 "is_configured": false, 00:23:40.514 "data_offset": 0, 00:23:40.514 "data_size": 0 00:23:40.514 }, 00:23:40.514 { 00:23:40.514 "name": "BaseBdev4", 00:23:40.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.514 "is_configured": false, 00:23:40.514 "data_offset": 0, 00:23:40.514 "data_size": 0 00:23:40.514 } 00:23:40.514 ] 00:23:40.514 }' 00:23:40.514 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.514 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.079 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:41.335 [2024-07-25 14:06:30.355798] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:41.336 [2024-07-25 14:06:30.355893] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:23:41.336 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:41.600 [2024-07-25 14:06:30.607860] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:41.600 [2024-07-25 14:06:30.610092] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:23:41.600 [2024-07-25 14:06:30.610164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:41.600 [2024-07-25 14:06:30.610178] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:41.600 [2024-07-25 14:06:30.610208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:41.600 [2024-07-25 14:06:30.610219] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:41.600 [2024-07-25 14:06:30.610238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.600 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.887 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.887 "name": "Existed_Raid", 00:23:41.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.887 "strip_size_kb": 64, 00:23:41.887 "state": "configuring", 00:23:41.887 "raid_level": "raid0", 00:23:41.887 "superblock": false, 00:23:41.887 "num_base_bdevs": 4, 00:23:41.887 "num_base_bdevs_discovered": 1, 00:23:41.887 "num_base_bdevs_operational": 4, 00:23:41.887 "base_bdevs_list": [ 00:23:41.887 { 00:23:41.887 "name": "BaseBdev1", 00:23:41.887 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:41.887 "is_configured": true, 00:23:41.887 "data_offset": 0, 00:23:41.887 "data_size": 65536 00:23:41.887 }, 00:23:41.887 { 00:23:41.887 "name": "BaseBdev2", 00:23:41.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.887 "is_configured": false, 00:23:41.887 "data_offset": 0, 00:23:41.887 "data_size": 0 00:23:41.887 }, 00:23:41.887 { 00:23:41.887 "name": "BaseBdev3", 00:23:41.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.887 "is_configured": false, 00:23:41.887 "data_offset": 0, 00:23:41.887 "data_size": 0 00:23:41.887 }, 
00:23:41.887 { 00:23:41.887 "name": "BaseBdev4", 00:23:41.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.887 "is_configured": false, 00:23:41.887 "data_offset": 0, 00:23:41.887 "data_size": 0 00:23:41.887 } 00:23:41.887 ] 00:23:41.887 }' 00:23:41.887 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.887 14:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:42.821 [2024-07-25 14:06:31.832797] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:42.821 BaseBdev2 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:42.821 14:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:43.079 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:43.337 [ 00:23:43.337 { 00:23:43.337 "name": "BaseBdev2", 00:23:43.337 "aliases": [ 00:23:43.337 "4aaa12d0-9a43-4156-bd9e-12908c69f8b0" 00:23:43.337 ], 00:23:43.337 "product_name": "Malloc disk", 00:23:43.337 "block_size": 512, 00:23:43.337 "num_blocks": 65536, 00:23:43.337 "uuid": "4aaa12d0-9a43-4156-bd9e-12908c69f8b0", 00:23:43.337 "assigned_rate_limits": { 00:23:43.337 "rw_ios_per_sec": 0, 00:23:43.337 "rw_mbytes_per_sec": 0, 00:23:43.337 "r_mbytes_per_sec": 0, 00:23:43.337 "w_mbytes_per_sec": 0 00:23:43.337 }, 00:23:43.337 "claimed": true, 00:23:43.337 "claim_type": "exclusive_write", 00:23:43.337 "zoned": false, 00:23:43.337 "supported_io_types": { 00:23:43.337 "read": true, 00:23:43.337 "write": true, 00:23:43.337 "unmap": true, 00:23:43.337 "flush": true, 00:23:43.337 "reset": true, 00:23:43.337 "nvme_admin": false, 00:23:43.337 "nvme_io": false, 00:23:43.337 "nvme_io_md": false, 00:23:43.337 "write_zeroes": true, 00:23:43.337 "zcopy": true, 00:23:43.337 "get_zone_info": false, 00:23:43.337 "zone_management": false, 00:23:43.337 "zone_append": false, 00:23:43.337 "compare": false, 00:23:43.337 "compare_and_write": false, 00:23:43.337 "abort": true, 00:23:43.337 "seek_hole": false, 00:23:43.337 "seek_data": false, 00:23:43.337 "copy": true, 00:23:43.337 "nvme_iov_md": false 00:23:43.337 }, 00:23:43.337 "memory_domains": [ 00:23:43.337 { 00:23:43.337 "dma_device_id": "system", 00:23:43.337 "dma_device_type": 1 00:23:43.337 }, 00:23:43.337 { 00:23:43.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.337 "dma_device_type": 2 00:23:43.337 } 00:23:43.337 ], 00:23:43.337 "driver_specific": {} 00:23:43.337 } 00:23:43.337 ] 00:23:43.337 14:06:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.337 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.902 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.902 "name": "Existed_Raid", 00:23:43.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.902 "strip_size_kb": 64, 00:23:43.902 "state": "configuring", 00:23:43.902 "raid_level": "raid0", 00:23:43.902 "superblock": false, 00:23:43.902 "num_base_bdevs": 4, 00:23:43.902 "num_base_bdevs_discovered": 2, 00:23:43.902 "num_base_bdevs_operational": 4, 00:23:43.902 "base_bdevs_list": [ 00:23:43.902 { 00:23:43.902 "name": "BaseBdev1", 00:23:43.902 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:43.902 "is_configured": true, 00:23:43.902 "data_offset": 0, 00:23:43.902 "data_size": 65536 00:23:43.902 }, 00:23:43.902 { 00:23:43.902 "name": "BaseBdev2", 00:23:43.902 "uuid": "4aaa12d0-9a43-4156-bd9e-12908c69f8b0", 00:23:43.902 "is_configured": true, 00:23:43.902 "data_offset": 0, 00:23:43.902 "data_size": 65536 00:23:43.902 }, 00:23:43.902 { 00:23:43.902 "name": "BaseBdev3", 00:23:43.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.902 "is_configured": false, 00:23:43.902 "data_offset": 0, 00:23:43.902 "data_size": 0 00:23:43.902 }, 00:23:43.902 { 00:23:43.902 "name": "BaseBdev4", 00:23:43.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.902 "is_configured": false, 00:23:43.902 "data_offset": 0, 00:23:43.902 "data_size": 0 00:23:43.902 } 00:23:43.902 ] 00:23:43.902 }' 00:23:43.902 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.902 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.467 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:23:44.725 [2024-07-25 14:06:33.593265] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:44.725 BaseBdev3 00:23:44.725 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:44.725 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:44.725 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:44.725 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:44.725 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:44.725 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:44.725 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.983 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:45.242 [ 00:23:45.242 { 00:23:45.242 "name": "BaseBdev3", 00:23:45.242 "aliases": [ 00:23:45.242 "71516f75-bba1-4781-b6b9-1a60dc723c19" 00:23:45.242 ], 00:23:45.242 "product_name": "Malloc disk", 00:23:45.242 "block_size": 512, 00:23:45.242 "num_blocks": 65536, 00:23:45.242 "uuid": "71516f75-bba1-4781-b6b9-1a60dc723c19", 00:23:45.242 "assigned_rate_limits": { 00:23:45.242 "rw_ios_per_sec": 0, 00:23:45.242 "rw_mbytes_per_sec": 0, 00:23:45.242 "r_mbytes_per_sec": 0, 00:23:45.242 "w_mbytes_per_sec": 0 00:23:45.242 }, 00:23:45.242 "claimed": true, 00:23:45.242 "claim_type": "exclusive_write", 00:23:45.242 "zoned": false, 00:23:45.242 "supported_io_types": { 00:23:45.242 "read": true, 00:23:45.242 "write": true, 00:23:45.242 "unmap": true, 00:23:45.242 "flush": true, 00:23:45.242 "reset": true, 00:23:45.242 "nvme_admin": false, 00:23:45.242 "nvme_io": false, 00:23:45.242 "nvme_io_md": false, 00:23:45.242 "write_zeroes": true, 00:23:45.242 "zcopy": true, 00:23:45.242 "get_zone_info": false, 00:23:45.242 "zone_management": false, 00:23:45.242 "zone_append": false, 00:23:45.242 "compare": false, 00:23:45.242 "compare_and_write": false, 00:23:45.242 "abort": true, 00:23:45.242 "seek_hole": false, 00:23:45.242 "seek_data": false, 00:23:45.242 "copy": true, 00:23:45.242 "nvme_iov_md": false 00:23:45.242 }, 00:23:45.242 "memory_domains": [ 00:23:45.242 { 00:23:45.242 "dma_device_id": "system", 00:23:45.242 "dma_device_type": 1 00:23:45.242 }, 00:23:45.242 { 00:23:45.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.242 "dma_device_type": 2 00:23:45.242 } 00:23:45.242 ], 00:23:45.242 "driver_specific": {} 00:23:45.242 } 00:23:45.242 ] 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.242 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.499 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.499 "name": "Existed_Raid", 00:23:45.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.499 "strip_size_kb": 64, 00:23:45.499 "state": "configuring", 00:23:45.499 "raid_level": "raid0", 00:23:45.499 "superblock": false, 00:23:45.499 "num_base_bdevs": 4, 00:23:45.499 "num_base_bdevs_discovered": 3, 00:23:45.499 "num_base_bdevs_operational": 4, 00:23:45.499 "base_bdevs_list": [ 00:23:45.499 { 00:23:45.499 "name": "BaseBdev1", 00:23:45.499 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:45.499 "is_configured": true, 00:23:45.499 "data_offset": 0, 00:23:45.499 "data_size": 65536 00:23:45.499 }, 00:23:45.499 { 00:23:45.499 "name": "BaseBdev2", 00:23:45.499 "uuid": "4aaa12d0-9a43-4156-bd9e-12908c69f8b0", 00:23:45.499 "is_configured": true, 00:23:45.499 "data_offset": 0, 00:23:45.499 "data_size": 65536 00:23:45.499 }, 00:23:45.499 { 00:23:45.499 "name": "BaseBdev3", 00:23:45.499 "uuid": "71516f75-bba1-4781-b6b9-1a60dc723c19", 00:23:45.499 "is_configured": true, 00:23:45.499 "data_offset": 0, 00:23:45.499 "data_size": 65536 00:23:45.499 }, 00:23:45.499 { 00:23:45.499 "name": "BaseBdev4", 00:23:45.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.499 "is_configured": false, 00:23:45.499 "data_offset": 0, 00:23:45.499 "data_size": 0 00:23:45.499 } 00:23:45.499 ] 00:23:45.499 }' 00:23:45.499 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.499 14:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:46.431 [2024-07-25 14:06:35.379902] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:46.431 [2024-07-25 14:06:35.379993] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:23:46.431 [2024-07-25 14:06:35.380005] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:46.431 [2024-07-25 14:06:35.380129] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:46.431 [2024-07-25 14:06:35.380539] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:23:46.431 [2024-07-25 14:06:35.380567] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:23:46.431 [2024-07-25 14:06:35.380850] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.431 BaseBdev4 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:46.431 14:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:46.995 14:06:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:46.995 [ 00:23:46.995 { 00:23:46.995 "name": "BaseBdev4", 00:23:46.995 "aliases": [ 00:23:46.995 "a128ef15-947a-413b-9b7f-99f7ee24e427" 00:23:46.995 ], 00:23:46.995 "product_name": "Malloc disk", 00:23:46.995 "block_size": 512, 00:23:46.995 "num_blocks": 65536, 00:23:46.995 "uuid": "a128ef15-947a-413b-9b7f-99f7ee24e427", 00:23:46.995 "assigned_rate_limits": { 00:23:46.995 "rw_ios_per_sec": 0, 00:23:46.996 "rw_mbytes_per_sec": 0, 00:23:46.996 "r_mbytes_per_sec": 0, 00:23:46.996 "w_mbytes_per_sec": 0 00:23:46.996 }, 00:23:46.996 "claimed": true, 00:23:46.996 "claim_type": "exclusive_write", 00:23:46.996 "zoned": false, 00:23:46.996 "supported_io_types": { 00:23:46.996 "read": true, 00:23:46.996 "write": true, 00:23:46.996 "unmap": true, 00:23:46.996 "flush": true, 00:23:46.996 "reset": true, 00:23:46.996 "nvme_admin": false, 00:23:46.996 "nvme_io": false, 00:23:46.996 "nvme_io_md": false, 00:23:46.996 "write_zeroes": true, 00:23:46.996 "zcopy": true, 00:23:46.996 "get_zone_info": false, 00:23:46.996 "zone_management": false, 00:23:46.996 "zone_append": false, 00:23:46.996 "compare": false, 00:23:46.996 "compare_and_write": false, 00:23:46.996 "abort": true, 00:23:46.996 "seek_hole": false, 00:23:46.996 "seek_data": false, 00:23:46.996 "copy": true, 00:23:46.996 "nvme_iov_md": false 00:23:46.996 }, 00:23:46.996 "memory_domains": [ 00:23:46.996 { 00:23:46.996 "dma_device_id": "system", 00:23:46.996 "dma_device_type": 1 00:23:46.996 }, 00:23:46.996 { 00:23:46.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.996 "dma_device_type": 2 00:23:46.996 } 00:23:46.996 ], 00:23:46.996 "driver_specific": {} 00:23:46.996 } 00:23:46.996 ] 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.253 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.511 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.511 "name": "Existed_Raid", 00:23:47.511 "uuid": "81298487-b269-4f58-87b2-7bcd2acca546", 00:23:47.511 "strip_size_kb": 64, 00:23:47.511 "state": "online", 00:23:47.511 "raid_level": "raid0", 00:23:47.511 "superblock": false, 00:23:47.511 "num_base_bdevs": 4, 00:23:47.511 "num_base_bdevs_discovered": 4, 00:23:47.511 "num_base_bdevs_operational": 4, 00:23:47.511 "base_bdevs_list": [ 00:23:47.511 { 00:23:47.511 "name": "BaseBdev1", 00:23:47.511 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:47.511 "is_configured": true, 00:23:47.511 "data_offset": 0, 00:23:47.511 "data_size": 65536 00:23:47.511 }, 00:23:47.511 { 00:23:47.511 "name": "BaseBdev2", 00:23:47.511 "uuid": "4aaa12d0-9a43-4156-bd9e-12908c69f8b0", 00:23:47.511 "is_configured": true, 00:23:47.511 "data_offset": 0, 00:23:47.511 "data_size": 65536 00:23:47.511 }, 00:23:47.511 { 00:23:47.511 "name": "BaseBdev3", 00:23:47.511 "uuid": "71516f75-bba1-4781-b6b9-1a60dc723c19", 00:23:47.511 "is_configured": true, 00:23:47.511 "data_offset": 0, 00:23:47.511 "data_size": 65536 00:23:47.511 }, 00:23:47.511 { 00:23:47.511 "name": "BaseBdev4", 00:23:47.511 "uuid": "a128ef15-947a-413b-9b7f-99f7ee24e427", 00:23:47.511 "is_configured": true, 00:23:47.511 "data_offset": 0, 00:23:47.511 "data_size": 65536 00:23:47.511 } 00:23:47.511 ] 00:23:47.511 }' 00:23:47.511 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.511 14:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.077 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:48.077 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:48.077 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:48.077 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:48.077 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:48.077 14:06:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:48.077 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:48.077 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:48.335 [2024-07-25 14:06:37.262751] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.335 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:48.335 "name": "Existed_Raid", 00:23:48.335 "aliases": [ 00:23:48.335 "81298487-b269-4f58-87b2-7bcd2acca546" 00:23:48.335 ], 00:23:48.335 "product_name": "Raid Volume", 00:23:48.335 "block_size": 512, 00:23:48.335 "num_blocks": 262144, 00:23:48.335 "uuid": "81298487-b269-4f58-87b2-7bcd2acca546", 00:23:48.335 "assigned_rate_limits": { 00:23:48.335 "rw_ios_per_sec": 0, 00:23:48.335 "rw_mbytes_per_sec": 0, 00:23:48.335 "r_mbytes_per_sec": 0, 00:23:48.335 "w_mbytes_per_sec": 0 00:23:48.335 }, 00:23:48.335 "claimed": false, 00:23:48.335 "zoned": false, 00:23:48.335 "supported_io_types": { 00:23:48.335 "read": true, 00:23:48.335 "write": true, 00:23:48.335 "unmap": true, 00:23:48.335 "flush": true, 00:23:48.335 "reset": true, 00:23:48.335 "nvme_admin": false, 00:23:48.335 "nvme_io": false, 00:23:48.335 "nvme_io_md": false, 00:23:48.335 "write_zeroes": true, 00:23:48.335 "zcopy": false, 00:23:48.335 "get_zone_info": false, 00:23:48.335 "zone_management": false, 00:23:48.335 "zone_append": false, 00:23:48.335 "compare": false, 00:23:48.335 "compare_and_write": false, 00:23:48.335 "abort": false, 00:23:48.335 "seek_hole": false, 00:23:48.335 "seek_data": false, 00:23:48.335 "copy": false, 00:23:48.335 "nvme_iov_md": false 00:23:48.335 }, 00:23:48.335 "memory_domains": [ 00:23:48.335 { 00:23:48.335 "dma_device_id": "system", 00:23:48.335 "dma_device_type": 1 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.335 "dma_device_type": 2 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "dma_device_id": "system", 00:23:48.335 "dma_device_type": 1 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.335 "dma_device_type": 2 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "dma_device_id": "system", 00:23:48.335 "dma_device_type": 1 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.335 "dma_device_type": 2 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "dma_device_id": "system", 00:23:48.335 "dma_device_type": 1 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.335 "dma_device_type": 2 00:23:48.335 } 00:23:48.335 ], 00:23:48.335 "driver_specific": { 00:23:48.335 "raid": { 00:23:48.335 "uuid": "81298487-b269-4f58-87b2-7bcd2acca546", 00:23:48.335 "strip_size_kb": 64, 00:23:48.335 "state": "online", 00:23:48.335 "raid_level": "raid0", 00:23:48.335 "superblock": false, 00:23:48.335 "num_base_bdevs": 4, 00:23:48.335 "num_base_bdevs_discovered": 4, 00:23:48.335 "num_base_bdevs_operational": 4, 00:23:48.335 "base_bdevs_list": [ 00:23:48.335 { 00:23:48.335 "name": "BaseBdev1", 00:23:48.335 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:48.335 "is_configured": true, 00:23:48.335 "data_offset": 0, 00:23:48.335 "data_size": 65536 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "name": "BaseBdev2", 00:23:48.335 "uuid": "4aaa12d0-9a43-4156-bd9e-12908c69f8b0", 00:23:48.335 
"is_configured": true, 00:23:48.335 "data_offset": 0, 00:23:48.335 "data_size": 65536 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "name": "BaseBdev3", 00:23:48.335 "uuid": "71516f75-bba1-4781-b6b9-1a60dc723c19", 00:23:48.335 "is_configured": true, 00:23:48.335 "data_offset": 0, 00:23:48.335 "data_size": 65536 00:23:48.335 }, 00:23:48.335 { 00:23:48.335 "name": "BaseBdev4", 00:23:48.335 "uuid": "a128ef15-947a-413b-9b7f-99f7ee24e427", 00:23:48.335 "is_configured": true, 00:23:48.335 "data_offset": 0, 00:23:48.335 "data_size": 65536 00:23:48.335 } 00:23:48.335 ] 00:23:48.335 } 00:23:48.335 } 00:23:48.335 }' 00:23:48.335 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:48.335 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:48.335 BaseBdev2 00:23:48.335 BaseBdev3 00:23:48.335 BaseBdev4' 00:23:48.335 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:48.335 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:48.335 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:48.594 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:48.594 "name": "BaseBdev1", 00:23:48.594 "aliases": [ 00:23:48.594 "8ea308ad-e2a6-4b0b-8e99-829218c993d0" 00:23:48.594 ], 00:23:48.594 "product_name": "Malloc disk", 00:23:48.594 "block_size": 512, 00:23:48.594 "num_blocks": 65536, 00:23:48.594 "uuid": "8ea308ad-e2a6-4b0b-8e99-829218c993d0", 00:23:48.594 "assigned_rate_limits": { 00:23:48.594 "rw_ios_per_sec": 0, 00:23:48.594 "rw_mbytes_per_sec": 0, 00:23:48.594 "r_mbytes_per_sec": 0, 00:23:48.594 "w_mbytes_per_sec": 0 00:23:48.594 }, 00:23:48.594 "claimed": true, 00:23:48.594 "claim_type": "exclusive_write", 00:23:48.594 "zoned": false, 00:23:48.594 "supported_io_types": { 00:23:48.594 "read": true, 00:23:48.594 "write": true, 00:23:48.594 "unmap": true, 00:23:48.594 "flush": true, 00:23:48.594 "reset": true, 00:23:48.594 "nvme_admin": false, 00:23:48.594 "nvme_io": false, 00:23:48.594 "nvme_io_md": false, 00:23:48.594 "write_zeroes": true, 00:23:48.594 "zcopy": true, 00:23:48.594 "get_zone_info": false, 00:23:48.594 "zone_management": false, 00:23:48.594 "zone_append": false, 00:23:48.594 "compare": false, 00:23:48.594 "compare_and_write": false, 00:23:48.594 "abort": true, 00:23:48.594 "seek_hole": false, 00:23:48.594 "seek_data": false, 00:23:48.594 "copy": true, 00:23:48.594 "nvme_iov_md": false 00:23:48.594 }, 00:23:48.594 "memory_domains": [ 00:23:48.594 { 00:23:48.594 "dma_device_id": "system", 00:23:48.594 "dma_device_type": 1 00:23:48.594 }, 00:23:48.594 { 00:23:48.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.594 "dma_device_type": 2 00:23:48.594 } 00:23:48.594 ], 00:23:48.594 "driver_specific": {} 00:23:48.594 }' 00:23:48.594 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:48.853 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:48.853 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:48.853 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:48.853 14:06:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:48.853 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:48.853 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:48.853 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:49.110 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:49.110 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:49.110 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:49.110 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:49.110 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:49.110 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:49.110 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:49.368 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:49.368 "name": "BaseBdev2", 00:23:49.368 "aliases": [ 00:23:49.368 "4aaa12d0-9a43-4156-bd9e-12908c69f8b0" 00:23:49.368 ], 00:23:49.368 "product_name": "Malloc disk", 00:23:49.368 "block_size": 512, 00:23:49.368 "num_blocks": 65536, 00:23:49.368 "uuid": "4aaa12d0-9a43-4156-bd9e-12908c69f8b0", 00:23:49.368 "assigned_rate_limits": { 00:23:49.368 "rw_ios_per_sec": 0, 00:23:49.368 "rw_mbytes_per_sec": 0, 00:23:49.368 "r_mbytes_per_sec": 0, 00:23:49.368 "w_mbytes_per_sec": 0 00:23:49.368 }, 00:23:49.368 "claimed": true, 00:23:49.368 "claim_type": "exclusive_write", 00:23:49.368 "zoned": false, 00:23:49.368 "supported_io_types": { 00:23:49.368 "read": true, 00:23:49.368 "write": true, 00:23:49.368 "unmap": true, 00:23:49.368 "flush": true, 00:23:49.368 "reset": true, 00:23:49.368 "nvme_admin": false, 00:23:49.368 "nvme_io": false, 00:23:49.368 "nvme_io_md": false, 00:23:49.368 "write_zeroes": true, 00:23:49.368 "zcopy": true, 00:23:49.368 "get_zone_info": false, 00:23:49.368 "zone_management": false, 00:23:49.368 "zone_append": false, 00:23:49.368 "compare": false, 00:23:49.368 "compare_and_write": false, 00:23:49.368 "abort": true, 00:23:49.368 "seek_hole": false, 00:23:49.368 "seek_data": false, 00:23:49.368 "copy": true, 00:23:49.368 "nvme_iov_md": false 00:23:49.368 }, 00:23:49.368 "memory_domains": [ 00:23:49.368 { 00:23:49.368 "dma_device_id": "system", 00:23:49.368 "dma_device_type": 1 00:23:49.368 }, 00:23:49.368 { 00:23:49.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.368 "dma_device_type": 2 00:23:49.368 } 00:23:49.368 ], 00:23:49.368 "driver_specific": {} 00:23:49.368 }' 00:23:49.368 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:49.368 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:49.368 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:49.368 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:49.626 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:49.626 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:49.626 14:06:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:49.626 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:49.626 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:49.626 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:49.626 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:49.884 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:49.884 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:49.884 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:49.884 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:50.141 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:50.141 "name": "BaseBdev3", 00:23:50.141 "aliases": [ 00:23:50.141 "71516f75-bba1-4781-b6b9-1a60dc723c19" 00:23:50.141 ], 00:23:50.141 "product_name": "Malloc disk", 00:23:50.141 "block_size": 512, 00:23:50.141 "num_blocks": 65536, 00:23:50.141 "uuid": "71516f75-bba1-4781-b6b9-1a60dc723c19", 00:23:50.141 "assigned_rate_limits": { 00:23:50.141 "rw_ios_per_sec": 0, 00:23:50.141 "rw_mbytes_per_sec": 0, 00:23:50.141 "r_mbytes_per_sec": 0, 00:23:50.141 "w_mbytes_per_sec": 0 00:23:50.141 }, 00:23:50.141 "claimed": true, 00:23:50.141 "claim_type": "exclusive_write", 00:23:50.141 "zoned": false, 00:23:50.141 "supported_io_types": { 00:23:50.141 "read": true, 00:23:50.141 "write": true, 00:23:50.141 "unmap": true, 00:23:50.141 "flush": true, 00:23:50.141 "reset": true, 00:23:50.141 "nvme_admin": false, 00:23:50.141 "nvme_io": false, 00:23:50.141 "nvme_io_md": false, 00:23:50.141 "write_zeroes": true, 00:23:50.141 "zcopy": true, 00:23:50.141 "get_zone_info": false, 00:23:50.141 "zone_management": false, 00:23:50.142 "zone_append": false, 00:23:50.142 "compare": false, 00:23:50.142 "compare_and_write": false, 00:23:50.142 "abort": true, 00:23:50.142 "seek_hole": false, 00:23:50.142 "seek_data": false, 00:23:50.142 "copy": true, 00:23:50.142 "nvme_iov_md": false 00:23:50.142 }, 00:23:50.142 "memory_domains": [ 00:23:50.142 { 00:23:50.142 "dma_device_id": "system", 00:23:50.142 "dma_device_type": 1 00:23:50.142 }, 00:23:50.142 { 00:23:50.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.142 "dma_device_type": 2 00:23:50.142 } 00:23:50.142 ], 00:23:50.142 "driver_specific": {} 00:23:50.142 }' 00:23:50.142 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.142 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.142 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:50.142 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.142 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.399 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:50.399 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.399 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.399 
14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:50.399 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:50.399 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:50.399 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:50.399 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:50.400 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:50.400 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:50.964 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:50.964 "name": "BaseBdev4", 00:23:50.964 "aliases": [ 00:23:50.964 "a128ef15-947a-413b-9b7f-99f7ee24e427" 00:23:50.964 ], 00:23:50.964 "product_name": "Malloc disk", 00:23:50.964 "block_size": 512, 00:23:50.964 "num_blocks": 65536, 00:23:50.964 "uuid": "a128ef15-947a-413b-9b7f-99f7ee24e427", 00:23:50.964 "assigned_rate_limits": { 00:23:50.964 "rw_ios_per_sec": 0, 00:23:50.964 "rw_mbytes_per_sec": 0, 00:23:50.964 "r_mbytes_per_sec": 0, 00:23:50.964 "w_mbytes_per_sec": 0 00:23:50.964 }, 00:23:50.964 "claimed": true, 00:23:50.964 "claim_type": "exclusive_write", 00:23:50.964 "zoned": false, 00:23:50.964 "supported_io_types": { 00:23:50.964 "read": true, 00:23:50.964 "write": true, 00:23:50.964 "unmap": true, 00:23:50.964 "flush": true, 00:23:50.964 "reset": true, 00:23:50.964 "nvme_admin": false, 00:23:50.964 "nvme_io": false, 00:23:50.964 "nvme_io_md": false, 00:23:50.964 "write_zeroes": true, 00:23:50.964 "zcopy": true, 00:23:50.964 "get_zone_info": false, 00:23:50.964 "zone_management": false, 00:23:50.964 "zone_append": false, 00:23:50.964 "compare": false, 00:23:50.964 "compare_and_write": false, 00:23:50.964 "abort": true, 00:23:50.964 "seek_hole": false, 00:23:50.964 "seek_data": false, 00:23:50.964 "copy": true, 00:23:50.964 "nvme_iov_md": false 00:23:50.964 }, 00:23:50.964 "memory_domains": [ 00:23:50.964 { 00:23:50.964 "dma_device_id": "system", 00:23:50.964 "dma_device_type": 1 00:23:50.964 }, 00:23:50.964 { 00:23:50.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.964 "dma_device_type": 2 00:23:50.964 } 00:23:50.964 ], 00:23:50.964 "driver_specific": {} 00:23:50.964 }' 00:23:50.964 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.964 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.964 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:50.964 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.964 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.965 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:50.965 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.965 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.965 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:50.965 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:23:51.222 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:51.222 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:51.222 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:51.480 [2024-07-25 14:06:40.383146] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:51.480 [2024-07-25 14:06:40.383205] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.480 [2024-07-25 14:06:40.383281] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.480 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.737 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.737 "name": "Existed_Raid", 00:23:51.737 "uuid": "81298487-b269-4f58-87b2-7bcd2acca546", 00:23:51.737 "strip_size_kb": 64, 00:23:51.737 "state": "offline", 00:23:51.737 "raid_level": "raid0", 00:23:51.737 "superblock": false, 00:23:51.737 "num_base_bdevs": 4, 00:23:51.737 "num_base_bdevs_discovered": 3, 00:23:51.737 "num_base_bdevs_operational": 3, 00:23:51.737 "base_bdevs_list": [ 00:23:51.737 { 00:23:51.737 "name": null, 00:23:51.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.737 "is_configured": false, 00:23:51.737 "data_offset": 0, 00:23:51.737 "data_size": 65536 00:23:51.737 }, 00:23:51.737 { 00:23:51.737 "name": "BaseBdev2", 00:23:51.737 "uuid": 
"4aaa12d0-9a43-4156-bd9e-12908c69f8b0", 00:23:51.737 "is_configured": true, 00:23:51.737 "data_offset": 0, 00:23:51.737 "data_size": 65536 00:23:51.737 }, 00:23:51.737 { 00:23:51.737 "name": "BaseBdev3", 00:23:51.737 "uuid": "71516f75-bba1-4781-b6b9-1a60dc723c19", 00:23:51.737 "is_configured": true, 00:23:51.738 "data_offset": 0, 00:23:51.738 "data_size": 65536 00:23:51.738 }, 00:23:51.738 { 00:23:51.738 "name": "BaseBdev4", 00:23:51.738 "uuid": "a128ef15-947a-413b-9b7f-99f7ee24e427", 00:23:51.738 "is_configured": true, 00:23:51.738 "data_offset": 0, 00:23:51.738 "data_size": 65536 00:23:51.738 } 00:23:51.738 ] 00:23:51.738 }' 00:23:51.738 14:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.738 14:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.670 14:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:52.670 14:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:52.670 14:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:52.670 14:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.670 14:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:52.671 14:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:52.671 14:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:53.235 [2024-07-25 14:06:41.976588] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:53.235 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:53.235 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:53.235 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.235 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:53.492 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:53.492 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:53.492 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:53.750 [2024-07-25 14:06:42.701997] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:54.008 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:54.008 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:54.008 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.008 14:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:54.266 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:54.266 14:06:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:54.266 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:54.522 [2024-07-25 14:06:43.379957] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:54.522 [2024-07-25 14:06:43.380057] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:23:54.522 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:54.522 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:54.522 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.522 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:54.779 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:54.779 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:54.779 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:54.779 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:54.779 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:54.779 14:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:55.037 BaseBdev2 00:23:55.037 14:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:55.037 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:55.037 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:55.037 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:55.037 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:55.037 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:55.037 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:55.602 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:55.602 [ 00:23:55.602 { 00:23:55.602 "name": "BaseBdev2", 00:23:55.602 "aliases": [ 00:23:55.602 "8e8a4936-80b3-48bb-af85-7933ebbe2c2c" 00:23:55.602 ], 00:23:55.602 "product_name": "Malloc disk", 00:23:55.602 "block_size": 512, 00:23:55.602 "num_blocks": 65536, 00:23:55.602 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:23:55.602 "assigned_rate_limits": { 00:23:55.602 "rw_ios_per_sec": 0, 00:23:55.602 "rw_mbytes_per_sec": 0, 00:23:55.602 "r_mbytes_per_sec": 0, 00:23:55.602 "w_mbytes_per_sec": 0 00:23:55.602 }, 00:23:55.602 "claimed": false, 00:23:55.602 "zoned": false, 00:23:55.602 "supported_io_types": { 00:23:55.602 "read": true, 00:23:55.602 "write": true, 00:23:55.602 "unmap": 
true, 00:23:55.602 "flush": true, 00:23:55.602 "reset": true, 00:23:55.602 "nvme_admin": false, 00:23:55.602 "nvme_io": false, 00:23:55.602 "nvme_io_md": false, 00:23:55.602 "write_zeroes": true, 00:23:55.602 "zcopy": true, 00:23:55.602 "get_zone_info": false, 00:23:55.602 "zone_management": false, 00:23:55.602 "zone_append": false, 00:23:55.602 "compare": false, 00:23:55.602 "compare_and_write": false, 00:23:55.602 "abort": true, 00:23:55.602 "seek_hole": false, 00:23:55.602 "seek_data": false, 00:23:55.602 "copy": true, 00:23:55.602 "nvme_iov_md": false 00:23:55.602 }, 00:23:55.602 "memory_domains": [ 00:23:55.602 { 00:23:55.602 "dma_device_id": "system", 00:23:55.602 "dma_device_type": 1 00:23:55.602 }, 00:23:55.602 { 00:23:55.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.602 "dma_device_type": 2 00:23:55.602 } 00:23:55.603 ], 00:23:55.603 "driver_specific": {} 00:23:55.603 } 00:23:55.603 ] 00:23:55.603 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:55.603 14:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:55.603 14:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:55.603 14:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:55.860 BaseBdev3 00:23:56.117 14:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:56.117 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:56.117 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:56.117 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:56.117 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:56.117 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:56.117 14:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:56.375 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:56.639 [ 00:23:56.639 { 00:23:56.639 "name": "BaseBdev3", 00:23:56.639 "aliases": [ 00:23:56.639 "7cbe9c71-b90d-49be-8e91-fa673b60a492" 00:23:56.639 ], 00:23:56.639 "product_name": "Malloc disk", 00:23:56.639 "block_size": 512, 00:23:56.639 "num_blocks": 65536, 00:23:56.639 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:23:56.639 "assigned_rate_limits": { 00:23:56.639 "rw_ios_per_sec": 0, 00:23:56.639 "rw_mbytes_per_sec": 0, 00:23:56.639 "r_mbytes_per_sec": 0, 00:23:56.639 "w_mbytes_per_sec": 0 00:23:56.639 }, 00:23:56.639 "claimed": false, 00:23:56.639 "zoned": false, 00:23:56.639 "supported_io_types": { 00:23:56.639 "read": true, 00:23:56.639 "write": true, 00:23:56.639 "unmap": true, 00:23:56.639 "flush": true, 00:23:56.639 "reset": true, 00:23:56.639 "nvme_admin": false, 00:23:56.639 "nvme_io": false, 00:23:56.639 "nvme_io_md": false, 00:23:56.639 "write_zeroes": true, 00:23:56.639 "zcopy": true, 00:23:56.639 "get_zone_info": false, 00:23:56.639 "zone_management": false, 00:23:56.639 "zone_append": false, 00:23:56.639 
"compare": false, 00:23:56.639 "compare_and_write": false, 00:23:56.639 "abort": true, 00:23:56.639 "seek_hole": false, 00:23:56.639 "seek_data": false, 00:23:56.639 "copy": true, 00:23:56.639 "nvme_iov_md": false 00:23:56.639 }, 00:23:56.639 "memory_domains": [ 00:23:56.639 { 00:23:56.639 "dma_device_id": "system", 00:23:56.639 "dma_device_type": 1 00:23:56.639 }, 00:23:56.639 { 00:23:56.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.639 "dma_device_type": 2 00:23:56.639 } 00:23:56.639 ], 00:23:56.639 "driver_specific": {} 00:23:56.639 } 00:23:56.639 ] 00:23:56.639 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:56.639 14:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:56.639 14:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:56.639 14:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:56.902 BaseBdev4 00:23:56.902 14:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:56.902 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:56.902 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:56.902 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:56.903 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:56.903 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:56.903 14:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:57.160 14:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:57.422 [ 00:23:57.422 { 00:23:57.422 "name": "BaseBdev4", 00:23:57.422 "aliases": [ 00:23:57.422 "2ecf88bd-ed29-49ea-9c07-93a89fdf39be" 00:23:57.422 ], 00:23:57.422 "product_name": "Malloc disk", 00:23:57.422 "block_size": 512, 00:23:57.422 "num_blocks": 65536, 00:23:57.422 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:23:57.422 "assigned_rate_limits": { 00:23:57.422 "rw_ios_per_sec": 0, 00:23:57.422 "rw_mbytes_per_sec": 0, 00:23:57.422 "r_mbytes_per_sec": 0, 00:23:57.422 "w_mbytes_per_sec": 0 00:23:57.422 }, 00:23:57.422 "claimed": false, 00:23:57.422 "zoned": false, 00:23:57.422 "supported_io_types": { 00:23:57.422 "read": true, 00:23:57.422 "write": true, 00:23:57.422 "unmap": true, 00:23:57.422 "flush": true, 00:23:57.422 "reset": true, 00:23:57.422 "nvme_admin": false, 00:23:57.422 "nvme_io": false, 00:23:57.422 "nvme_io_md": false, 00:23:57.422 "write_zeroes": true, 00:23:57.422 "zcopy": true, 00:23:57.422 "get_zone_info": false, 00:23:57.422 "zone_management": false, 00:23:57.422 "zone_append": false, 00:23:57.422 "compare": false, 00:23:57.422 "compare_and_write": false, 00:23:57.422 "abort": true, 00:23:57.422 "seek_hole": false, 00:23:57.422 "seek_data": false, 00:23:57.422 "copy": true, 00:23:57.422 "nvme_iov_md": false 00:23:57.422 }, 00:23:57.422 "memory_domains": [ 00:23:57.422 { 00:23:57.422 "dma_device_id": "system", 00:23:57.422 
"dma_device_type": 1 00:23:57.422 }, 00:23:57.422 { 00:23:57.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.422 "dma_device_type": 2 00:23:57.422 } 00:23:57.422 ], 00:23:57.422 "driver_specific": {} 00:23:57.422 } 00:23:57.422 ] 00:23:57.422 14:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:57.422 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:57.422 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:57.422 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:57.679 [2024-07-25 14:06:46.623460] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:57.679 [2024-07-25 14:06:46.623559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:57.679 [2024-07-25 14:06:46.623592] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:57.679 [2024-07-25 14:06:46.625729] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:57.679 [2024-07-25 14:06:46.625826] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:57.679 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.680 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.937 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.937 "name": "Existed_Raid", 00:23:57.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.937 "strip_size_kb": 64, 00:23:57.937 "state": "configuring", 00:23:57.937 "raid_level": "raid0", 00:23:57.937 "superblock": false, 00:23:57.937 "num_base_bdevs": 4, 00:23:57.937 "num_base_bdevs_discovered": 3, 00:23:57.937 "num_base_bdevs_operational": 4, 00:23:57.937 "base_bdevs_list": [ 00:23:57.937 { 00:23:57.937 "name": "BaseBdev1", 00:23:57.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.937 "is_configured": 
false, 00:23:57.937 "data_offset": 0, 00:23:57.937 "data_size": 0 00:23:57.937 }, 00:23:57.937 { 00:23:57.937 "name": "BaseBdev2", 00:23:57.937 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:23:57.937 "is_configured": true, 00:23:57.937 "data_offset": 0, 00:23:57.937 "data_size": 65536 00:23:57.937 }, 00:23:57.937 { 00:23:57.937 "name": "BaseBdev3", 00:23:57.937 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:23:57.937 "is_configured": true, 00:23:57.937 "data_offset": 0, 00:23:57.937 "data_size": 65536 00:23:57.937 }, 00:23:57.937 { 00:23:57.937 "name": "BaseBdev4", 00:23:57.937 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:23:57.937 "is_configured": true, 00:23:57.937 "data_offset": 0, 00:23:57.937 "data_size": 65536 00:23:57.937 } 00:23:57.937 ] 00:23:57.937 }' 00:23:57.937 14:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.937 14:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:58.870 [2024-07-25 14:06:47.834401] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.870 14:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.128 14:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.128 "name": "Existed_Raid", 00:23:59.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.128 "strip_size_kb": 64, 00:23:59.128 "state": "configuring", 00:23:59.128 "raid_level": "raid0", 00:23:59.128 "superblock": false, 00:23:59.128 "num_base_bdevs": 4, 00:23:59.128 "num_base_bdevs_discovered": 2, 00:23:59.128 "num_base_bdevs_operational": 4, 00:23:59.128 "base_bdevs_list": [ 00:23:59.128 { 00:23:59.128 "name": "BaseBdev1", 00:23:59.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.128 "is_configured": false, 00:23:59.128 "data_offset": 0, 00:23:59.128 "data_size": 0 00:23:59.128 }, 00:23:59.128 { 00:23:59.128 "name": null, 00:23:59.128 "uuid": 
"8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:23:59.128 "is_configured": false, 00:23:59.128 "data_offset": 0, 00:23:59.128 "data_size": 65536 00:23:59.128 }, 00:23:59.128 { 00:23:59.128 "name": "BaseBdev3", 00:23:59.128 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:23:59.128 "is_configured": true, 00:23:59.128 "data_offset": 0, 00:23:59.128 "data_size": 65536 00:23:59.128 }, 00:23:59.128 { 00:23:59.128 "name": "BaseBdev4", 00:23:59.128 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:23:59.128 "is_configured": true, 00:23:59.128 "data_offset": 0, 00:23:59.128 "data_size": 65536 00:23:59.128 } 00:23:59.128 ] 00:23:59.128 }' 00:23:59.128 14:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.128 14:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.061 14:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:00.061 14:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.061 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:00.061 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:00.629 [2024-07-25 14:06:49.384796] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:00.629 BaseBdev1 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:00.629 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:01.238 [ 00:24:01.238 { 00:24:01.238 "name": "BaseBdev1", 00:24:01.238 "aliases": [ 00:24:01.238 "de855a5c-6865-4d4d-83f0-3fcf1a931cdd" 00:24:01.238 ], 00:24:01.238 "product_name": "Malloc disk", 00:24:01.238 "block_size": 512, 00:24:01.238 "num_blocks": 65536, 00:24:01.238 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:01.238 "assigned_rate_limits": { 00:24:01.238 "rw_ios_per_sec": 0, 00:24:01.238 "rw_mbytes_per_sec": 0, 00:24:01.238 "r_mbytes_per_sec": 0, 00:24:01.238 "w_mbytes_per_sec": 0 00:24:01.238 }, 00:24:01.238 "claimed": true, 00:24:01.238 "claim_type": "exclusive_write", 00:24:01.238 "zoned": false, 00:24:01.238 "supported_io_types": { 00:24:01.238 "read": true, 00:24:01.238 "write": true, 00:24:01.238 "unmap": true, 00:24:01.238 "flush": true, 00:24:01.238 "reset": true, 00:24:01.238 "nvme_admin": false, 00:24:01.238 "nvme_io": false, 00:24:01.238 
"nvme_io_md": false, 00:24:01.238 "write_zeroes": true, 00:24:01.238 "zcopy": true, 00:24:01.238 "get_zone_info": false, 00:24:01.238 "zone_management": false, 00:24:01.238 "zone_append": false, 00:24:01.238 "compare": false, 00:24:01.238 "compare_and_write": false, 00:24:01.238 "abort": true, 00:24:01.238 "seek_hole": false, 00:24:01.238 "seek_data": false, 00:24:01.238 "copy": true, 00:24:01.238 "nvme_iov_md": false 00:24:01.238 }, 00:24:01.238 "memory_domains": [ 00:24:01.238 { 00:24:01.238 "dma_device_id": "system", 00:24:01.238 "dma_device_type": 1 00:24:01.238 }, 00:24:01.238 { 00:24:01.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.238 "dma_device_type": 2 00:24:01.238 } 00:24:01.238 ], 00:24:01.238 "driver_specific": {} 00:24:01.238 } 00:24:01.238 ] 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.238 14:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.238 14:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:01.238 "name": "Existed_Raid", 00:24:01.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.238 "strip_size_kb": 64, 00:24:01.238 "state": "configuring", 00:24:01.238 "raid_level": "raid0", 00:24:01.238 "superblock": false, 00:24:01.238 "num_base_bdevs": 4, 00:24:01.238 "num_base_bdevs_discovered": 3, 00:24:01.238 "num_base_bdevs_operational": 4, 00:24:01.238 "base_bdevs_list": [ 00:24:01.238 { 00:24:01.238 "name": "BaseBdev1", 00:24:01.238 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:01.238 "is_configured": true, 00:24:01.238 "data_offset": 0, 00:24:01.238 "data_size": 65536 00:24:01.238 }, 00:24:01.238 { 00:24:01.238 "name": null, 00:24:01.238 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:01.238 "is_configured": false, 00:24:01.238 "data_offset": 0, 00:24:01.238 "data_size": 65536 00:24:01.238 }, 00:24:01.238 { 00:24:01.238 "name": "BaseBdev3", 00:24:01.238 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:01.238 "is_configured": true, 00:24:01.238 "data_offset": 0, 00:24:01.238 "data_size": 65536 00:24:01.238 }, 00:24:01.238 { 00:24:01.238 
"name": "BaseBdev4", 00:24:01.238 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:01.238 "is_configured": true, 00:24:01.238 "data_offset": 0, 00:24:01.238 "data_size": 65536 00:24:01.238 } 00:24:01.238 ] 00:24:01.238 }' 00:24:01.238 14:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:01.238 14:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.172 14:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.172 14:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:02.430 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:02.430 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:02.688 [2024-07-25 14:06:51.509440] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.688 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.946 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.946 "name": "Existed_Raid", 00:24:02.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.946 "strip_size_kb": 64, 00:24:02.946 "state": "configuring", 00:24:02.946 "raid_level": "raid0", 00:24:02.946 "superblock": false, 00:24:02.946 "num_base_bdevs": 4, 00:24:02.946 "num_base_bdevs_discovered": 2, 00:24:02.946 "num_base_bdevs_operational": 4, 00:24:02.946 "base_bdevs_list": [ 00:24:02.946 { 00:24:02.946 "name": "BaseBdev1", 00:24:02.946 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:02.946 "is_configured": true, 00:24:02.946 "data_offset": 0, 00:24:02.946 "data_size": 65536 00:24:02.946 }, 00:24:02.946 { 00:24:02.946 "name": null, 00:24:02.946 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:02.946 "is_configured": false, 00:24:02.946 "data_offset": 0, 00:24:02.946 "data_size": 
65536 00:24:02.946 }, 00:24:02.946 { 00:24:02.946 "name": null, 00:24:02.946 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:02.946 "is_configured": false, 00:24:02.946 "data_offset": 0, 00:24:02.946 "data_size": 65536 00:24:02.946 }, 00:24:02.946 { 00:24:02.946 "name": "BaseBdev4", 00:24:02.946 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:02.946 "is_configured": true, 00:24:02.946 "data_offset": 0, 00:24:02.946 "data_size": 65536 00:24:02.946 } 00:24:02.946 ] 00:24:02.946 }' 00:24:02.946 14:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.946 14:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.513 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.513 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:03.771 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:03.771 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:04.028 [2024-07-25 14:06:52.982122] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.028 14:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.028 14:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.028 14:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.286 14:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.286 "name": "Existed_Raid", 00:24:04.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.286 "strip_size_kb": 64, 00:24:04.286 "state": "configuring", 00:24:04.286 "raid_level": "raid0", 00:24:04.286 "superblock": false, 00:24:04.286 "num_base_bdevs": 4, 00:24:04.286 "num_base_bdevs_discovered": 3, 00:24:04.286 "num_base_bdevs_operational": 4, 00:24:04.286 "base_bdevs_list": [ 00:24:04.286 { 00:24:04.287 "name": "BaseBdev1", 00:24:04.287 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:04.287 
"is_configured": true, 00:24:04.287 "data_offset": 0, 00:24:04.287 "data_size": 65536 00:24:04.287 }, 00:24:04.287 { 00:24:04.287 "name": null, 00:24:04.287 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:04.287 "is_configured": false, 00:24:04.287 "data_offset": 0, 00:24:04.287 "data_size": 65536 00:24:04.287 }, 00:24:04.287 { 00:24:04.287 "name": "BaseBdev3", 00:24:04.287 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:04.287 "is_configured": true, 00:24:04.287 "data_offset": 0, 00:24:04.287 "data_size": 65536 00:24:04.287 }, 00:24:04.287 { 00:24:04.287 "name": "BaseBdev4", 00:24:04.287 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:04.287 "is_configured": true, 00:24:04.287 "data_offset": 0, 00:24:04.287 "data_size": 65536 00:24:04.287 } 00:24:04.287 ] 00:24:04.287 }' 00:24:04.287 14:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.287 14:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.224 14:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.224 14:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:05.224 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:05.224 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:05.508 [2024-07-25 14:06:54.414092] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.508 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.766 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.766 "name": "Existed_Raid", 00:24:05.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.766 "strip_size_kb": 64, 00:24:05.766 "state": "configuring", 00:24:05.766 "raid_level": "raid0", 00:24:05.766 "superblock": false, 00:24:05.766 
"num_base_bdevs": 4, 00:24:05.766 "num_base_bdevs_discovered": 2, 00:24:05.766 "num_base_bdevs_operational": 4, 00:24:05.767 "base_bdevs_list": [ 00:24:05.767 { 00:24:05.767 "name": null, 00:24:05.767 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:05.767 "is_configured": false, 00:24:05.767 "data_offset": 0, 00:24:05.767 "data_size": 65536 00:24:05.767 }, 00:24:05.767 { 00:24:05.767 "name": null, 00:24:05.767 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:05.767 "is_configured": false, 00:24:05.767 "data_offset": 0, 00:24:05.767 "data_size": 65536 00:24:05.767 }, 00:24:05.767 { 00:24:05.767 "name": "BaseBdev3", 00:24:05.767 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:05.767 "is_configured": true, 00:24:05.767 "data_offset": 0, 00:24:05.767 "data_size": 65536 00:24:05.767 }, 00:24:05.767 { 00:24:05.767 "name": "BaseBdev4", 00:24:05.767 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:05.767 "is_configured": true, 00:24:05.767 "data_offset": 0, 00:24:05.767 "data_size": 65536 00:24:05.767 } 00:24:05.767 ] 00:24:05.767 }' 00:24:05.767 14:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.767 14:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.701 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:06.701 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.701 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:06.701 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:06.960 [2024-07-25 14:06:55.963046] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.960 14:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.525 14:06:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.525 "name": "Existed_Raid", 00:24:07.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.525 "strip_size_kb": 64, 00:24:07.525 "state": "configuring", 00:24:07.525 "raid_level": "raid0", 00:24:07.525 "superblock": false, 00:24:07.525 "num_base_bdevs": 4, 00:24:07.525 "num_base_bdevs_discovered": 3, 00:24:07.525 "num_base_bdevs_operational": 4, 00:24:07.525 "base_bdevs_list": [ 00:24:07.525 { 00:24:07.525 "name": null, 00:24:07.525 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:07.525 "is_configured": false, 00:24:07.525 "data_offset": 0, 00:24:07.525 "data_size": 65536 00:24:07.525 }, 00:24:07.525 { 00:24:07.525 "name": "BaseBdev2", 00:24:07.525 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:07.525 "is_configured": true, 00:24:07.525 "data_offset": 0, 00:24:07.525 "data_size": 65536 00:24:07.525 }, 00:24:07.525 { 00:24:07.525 "name": "BaseBdev3", 00:24:07.525 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:07.525 "is_configured": true, 00:24:07.525 "data_offset": 0, 00:24:07.525 "data_size": 65536 00:24:07.525 }, 00:24:07.525 { 00:24:07.525 "name": "BaseBdev4", 00:24:07.525 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:07.525 "is_configured": true, 00:24:07.525 "data_offset": 0, 00:24:07.525 "data_size": 65536 00:24:07.525 } 00:24:07.525 ] 00:24:07.525 }' 00:24:07.525 14:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.525 14:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.091 14:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.091 14:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:08.348 14:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:08.348 14:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:08.348 14:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.606 14:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u de855a5c-6865-4d4d-83f0-3fcf1a931cdd 00:24:08.865 [2024-07-25 14:06:57.823355] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:08.865 [2024-07-25 14:06:57.823467] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:24:08.865 [2024-07-25 14:06:57.823478] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:08.865 [2024-07-25 14:06:57.823675] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:08.865 [2024-07-25 14:06:57.824072] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:24:08.865 [2024-07-25 14:06:57.824103] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:24:08.865 [2024-07-25 14:06:57.824368] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.865 NewBaseBdev 00:24:08.865 14:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:24:08.865 14:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:24:08.865 14:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:08.865 14:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:08.865 14:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:08.865 14:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:08.865 14:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:09.128 14:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:09.385 [ 00:24:09.385 { 00:24:09.385 "name": "NewBaseBdev", 00:24:09.385 "aliases": [ 00:24:09.385 "de855a5c-6865-4d4d-83f0-3fcf1a931cdd" 00:24:09.385 ], 00:24:09.385 "product_name": "Malloc disk", 00:24:09.385 "block_size": 512, 00:24:09.385 "num_blocks": 65536, 00:24:09.385 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:09.385 "assigned_rate_limits": { 00:24:09.385 "rw_ios_per_sec": 0, 00:24:09.385 "rw_mbytes_per_sec": 0, 00:24:09.385 "r_mbytes_per_sec": 0, 00:24:09.385 "w_mbytes_per_sec": 0 00:24:09.385 }, 00:24:09.385 "claimed": true, 00:24:09.385 "claim_type": "exclusive_write", 00:24:09.385 "zoned": false, 00:24:09.385 "supported_io_types": { 00:24:09.385 "read": true, 00:24:09.385 "write": true, 00:24:09.385 "unmap": true, 00:24:09.385 "flush": true, 00:24:09.385 "reset": true, 00:24:09.385 "nvme_admin": false, 00:24:09.385 "nvme_io": false, 00:24:09.385 "nvme_io_md": false, 00:24:09.385 "write_zeroes": true, 00:24:09.385 "zcopy": true, 00:24:09.385 "get_zone_info": false, 00:24:09.385 "zone_management": false, 00:24:09.385 "zone_append": false, 00:24:09.385 "compare": false, 00:24:09.385 "compare_and_write": false, 00:24:09.385 "abort": true, 00:24:09.385 "seek_hole": false, 00:24:09.385 "seek_data": false, 00:24:09.385 "copy": true, 00:24:09.385 "nvme_iov_md": false 00:24:09.385 }, 00:24:09.385 "memory_domains": [ 00:24:09.385 { 00:24:09.385 "dma_device_id": "system", 00:24:09.385 "dma_device_type": 1 00:24:09.385 }, 00:24:09.385 { 00:24:09.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.385 "dma_device_type": 2 00:24:09.385 } 00:24:09.385 ], 00:24:09.385 "driver_specific": {} 00:24:09.385 } 00:24:09.385 ] 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.385 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.643 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.643 "name": "Existed_Raid", 00:24:09.643 "uuid": "b71ed475-14a9-40f2-8234-4a37676d1ec7", 00:24:09.643 "strip_size_kb": 64, 00:24:09.643 "state": "online", 00:24:09.643 "raid_level": "raid0", 00:24:09.643 "superblock": false, 00:24:09.643 "num_base_bdevs": 4, 00:24:09.643 "num_base_bdevs_discovered": 4, 00:24:09.643 "num_base_bdevs_operational": 4, 00:24:09.643 "base_bdevs_list": [ 00:24:09.643 { 00:24:09.643 "name": "NewBaseBdev", 00:24:09.643 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:09.643 "is_configured": true, 00:24:09.643 "data_offset": 0, 00:24:09.643 "data_size": 65536 00:24:09.643 }, 00:24:09.643 { 00:24:09.643 "name": "BaseBdev2", 00:24:09.643 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:09.643 "is_configured": true, 00:24:09.643 "data_offset": 0, 00:24:09.643 "data_size": 65536 00:24:09.643 }, 00:24:09.643 { 00:24:09.643 "name": "BaseBdev3", 00:24:09.643 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:09.643 "is_configured": true, 00:24:09.643 "data_offset": 0, 00:24:09.643 "data_size": 65536 00:24:09.643 }, 00:24:09.643 { 00:24:09.643 "name": "BaseBdev4", 00:24:09.643 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:09.643 "is_configured": true, 00:24:09.643 "data_offset": 0, 00:24:09.643 "data_size": 65536 00:24:09.643 } 00:24:09.643 ] 00:24:09.643 }' 00:24:09.643 14:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.643 14:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:10.574 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:10.830 [2024-07-25 14:06:59.632332] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:10.830 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:10.830 "name": "Existed_Raid", 00:24:10.830 "aliases": [ 00:24:10.831 
"b71ed475-14a9-40f2-8234-4a37676d1ec7" 00:24:10.831 ], 00:24:10.831 "product_name": "Raid Volume", 00:24:10.831 "block_size": 512, 00:24:10.831 "num_blocks": 262144, 00:24:10.831 "uuid": "b71ed475-14a9-40f2-8234-4a37676d1ec7", 00:24:10.831 "assigned_rate_limits": { 00:24:10.831 "rw_ios_per_sec": 0, 00:24:10.831 "rw_mbytes_per_sec": 0, 00:24:10.831 "r_mbytes_per_sec": 0, 00:24:10.831 "w_mbytes_per_sec": 0 00:24:10.831 }, 00:24:10.831 "claimed": false, 00:24:10.831 "zoned": false, 00:24:10.831 "supported_io_types": { 00:24:10.831 "read": true, 00:24:10.831 "write": true, 00:24:10.831 "unmap": true, 00:24:10.831 "flush": true, 00:24:10.831 "reset": true, 00:24:10.831 "nvme_admin": false, 00:24:10.831 "nvme_io": false, 00:24:10.831 "nvme_io_md": false, 00:24:10.831 "write_zeroes": true, 00:24:10.831 "zcopy": false, 00:24:10.831 "get_zone_info": false, 00:24:10.831 "zone_management": false, 00:24:10.831 "zone_append": false, 00:24:10.831 "compare": false, 00:24:10.831 "compare_and_write": false, 00:24:10.831 "abort": false, 00:24:10.831 "seek_hole": false, 00:24:10.831 "seek_data": false, 00:24:10.831 "copy": false, 00:24:10.831 "nvme_iov_md": false 00:24:10.831 }, 00:24:10.831 "memory_domains": [ 00:24:10.831 { 00:24:10.831 "dma_device_id": "system", 00:24:10.831 "dma_device_type": 1 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.831 "dma_device_type": 2 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "dma_device_id": "system", 00:24:10.831 "dma_device_type": 1 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.831 "dma_device_type": 2 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "dma_device_id": "system", 00:24:10.831 "dma_device_type": 1 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.831 "dma_device_type": 2 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "dma_device_id": "system", 00:24:10.831 "dma_device_type": 1 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.831 "dma_device_type": 2 00:24:10.831 } 00:24:10.831 ], 00:24:10.831 "driver_specific": { 00:24:10.831 "raid": { 00:24:10.831 "uuid": "b71ed475-14a9-40f2-8234-4a37676d1ec7", 00:24:10.831 "strip_size_kb": 64, 00:24:10.831 "state": "online", 00:24:10.831 "raid_level": "raid0", 00:24:10.831 "superblock": false, 00:24:10.831 "num_base_bdevs": 4, 00:24:10.831 "num_base_bdevs_discovered": 4, 00:24:10.831 "num_base_bdevs_operational": 4, 00:24:10.831 "base_bdevs_list": [ 00:24:10.831 { 00:24:10.831 "name": "NewBaseBdev", 00:24:10.831 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:10.831 "is_configured": true, 00:24:10.831 "data_offset": 0, 00:24:10.831 "data_size": 65536 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "name": "BaseBdev2", 00:24:10.831 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:10.831 "is_configured": true, 00:24:10.831 "data_offset": 0, 00:24:10.831 "data_size": 65536 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "name": "BaseBdev3", 00:24:10.831 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:10.831 "is_configured": true, 00:24:10.831 "data_offset": 0, 00:24:10.831 "data_size": 65536 00:24:10.831 }, 00:24:10.831 { 00:24:10.831 "name": "BaseBdev4", 00:24:10.831 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:10.831 "is_configured": true, 00:24:10.831 "data_offset": 0, 00:24:10.831 "data_size": 65536 00:24:10.831 } 00:24:10.831 ] 00:24:10.831 } 00:24:10.831 } 00:24:10.831 }' 00:24:10.831 14:06:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:10.831 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:10.831 BaseBdev2 00:24:10.831 BaseBdev3 00:24:10.831 BaseBdev4' 00:24:10.831 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:10.831 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:10.831 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:11.088 14:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:11.088 "name": "NewBaseBdev", 00:24:11.088 "aliases": [ 00:24:11.088 "de855a5c-6865-4d4d-83f0-3fcf1a931cdd" 00:24:11.088 ], 00:24:11.088 "product_name": "Malloc disk", 00:24:11.088 "block_size": 512, 00:24:11.088 "num_blocks": 65536, 00:24:11.088 "uuid": "de855a5c-6865-4d4d-83f0-3fcf1a931cdd", 00:24:11.088 "assigned_rate_limits": { 00:24:11.088 "rw_ios_per_sec": 0, 00:24:11.088 "rw_mbytes_per_sec": 0, 00:24:11.088 "r_mbytes_per_sec": 0, 00:24:11.088 "w_mbytes_per_sec": 0 00:24:11.088 }, 00:24:11.088 "claimed": true, 00:24:11.088 "claim_type": "exclusive_write", 00:24:11.088 "zoned": false, 00:24:11.088 "supported_io_types": { 00:24:11.088 "read": true, 00:24:11.088 "write": true, 00:24:11.088 "unmap": true, 00:24:11.088 "flush": true, 00:24:11.088 "reset": true, 00:24:11.088 "nvme_admin": false, 00:24:11.088 "nvme_io": false, 00:24:11.088 "nvme_io_md": false, 00:24:11.088 "write_zeroes": true, 00:24:11.088 "zcopy": true, 00:24:11.088 "get_zone_info": false, 00:24:11.088 "zone_management": false, 00:24:11.088 "zone_append": false, 00:24:11.088 "compare": false, 00:24:11.088 "compare_and_write": false, 00:24:11.088 "abort": true, 00:24:11.088 "seek_hole": false, 00:24:11.088 "seek_data": false, 00:24:11.088 "copy": true, 00:24:11.088 "nvme_iov_md": false 00:24:11.088 }, 00:24:11.088 "memory_domains": [ 00:24:11.088 { 00:24:11.088 "dma_device_id": "system", 00:24:11.088 "dma_device_type": 1 00:24:11.088 }, 00:24:11.088 { 00:24:11.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.088 "dma_device_type": 2 00:24:11.088 } 00:24:11.088 ], 00:24:11.088 "driver_specific": {} 00:24:11.088 }' 00:24:11.088 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.088 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.088 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:11.088 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.344 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.344 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:11.344 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.344 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:11.344 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:11.344 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.344 14:07:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:11.601 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:11.601 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:11.601 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:11.601 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:11.859 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:11.859 "name": "BaseBdev2", 00:24:11.859 "aliases": [ 00:24:11.859 "8e8a4936-80b3-48bb-af85-7933ebbe2c2c" 00:24:11.859 ], 00:24:11.859 "product_name": "Malloc disk", 00:24:11.859 "block_size": 512, 00:24:11.859 "num_blocks": 65536, 00:24:11.859 "uuid": "8e8a4936-80b3-48bb-af85-7933ebbe2c2c", 00:24:11.859 "assigned_rate_limits": { 00:24:11.859 "rw_ios_per_sec": 0, 00:24:11.859 "rw_mbytes_per_sec": 0, 00:24:11.859 "r_mbytes_per_sec": 0, 00:24:11.859 "w_mbytes_per_sec": 0 00:24:11.859 }, 00:24:11.859 "claimed": true, 00:24:11.859 "claim_type": "exclusive_write", 00:24:11.859 "zoned": false, 00:24:11.859 "supported_io_types": { 00:24:11.859 "read": true, 00:24:11.859 "write": true, 00:24:11.859 "unmap": true, 00:24:11.859 "flush": true, 00:24:11.859 "reset": true, 00:24:11.859 "nvme_admin": false, 00:24:11.859 "nvme_io": false, 00:24:11.859 "nvme_io_md": false, 00:24:11.859 "write_zeroes": true, 00:24:11.859 "zcopy": true, 00:24:11.859 "get_zone_info": false, 00:24:11.859 "zone_management": false, 00:24:11.859 "zone_append": false, 00:24:11.859 "compare": false, 00:24:11.859 "compare_and_write": false, 00:24:11.859 "abort": true, 00:24:11.859 "seek_hole": false, 00:24:11.859 "seek_data": false, 00:24:11.859 "copy": true, 00:24:11.859 "nvme_iov_md": false 00:24:11.859 }, 00:24:11.859 "memory_domains": [ 00:24:11.859 { 00:24:11.859 "dma_device_id": "system", 00:24:11.859 "dma_device_type": 1 00:24:11.859 }, 00:24:11.859 { 00:24:11.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.859 "dma_device_type": 2 00:24:11.859 } 00:24:11.859 ], 00:24:11.859 "driver_specific": {} 00:24:11.859 }' 00:24:11.859 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.859 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:11.859 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:11.859 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:11.859 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:12.115 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:12.115 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.115 14:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.115 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:12.115 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.115 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.116 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:12.116 14:07:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:12.116 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:12.116 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:12.372 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:12.373 "name": "BaseBdev3", 00:24:12.373 "aliases": [ 00:24:12.373 "7cbe9c71-b90d-49be-8e91-fa673b60a492" 00:24:12.373 ], 00:24:12.373 "product_name": "Malloc disk", 00:24:12.373 "block_size": 512, 00:24:12.373 "num_blocks": 65536, 00:24:12.373 "uuid": "7cbe9c71-b90d-49be-8e91-fa673b60a492", 00:24:12.373 "assigned_rate_limits": { 00:24:12.373 "rw_ios_per_sec": 0, 00:24:12.373 "rw_mbytes_per_sec": 0, 00:24:12.373 "r_mbytes_per_sec": 0, 00:24:12.373 "w_mbytes_per_sec": 0 00:24:12.373 }, 00:24:12.373 "claimed": true, 00:24:12.373 "claim_type": "exclusive_write", 00:24:12.373 "zoned": false, 00:24:12.373 "supported_io_types": { 00:24:12.373 "read": true, 00:24:12.373 "write": true, 00:24:12.373 "unmap": true, 00:24:12.373 "flush": true, 00:24:12.373 "reset": true, 00:24:12.373 "nvme_admin": false, 00:24:12.373 "nvme_io": false, 00:24:12.373 "nvme_io_md": false, 00:24:12.373 "write_zeroes": true, 00:24:12.373 "zcopy": true, 00:24:12.373 "get_zone_info": false, 00:24:12.373 "zone_management": false, 00:24:12.373 "zone_append": false, 00:24:12.373 "compare": false, 00:24:12.373 "compare_and_write": false, 00:24:12.373 "abort": true, 00:24:12.373 "seek_hole": false, 00:24:12.373 "seek_data": false, 00:24:12.373 "copy": true, 00:24:12.373 "nvme_iov_md": false 00:24:12.373 }, 00:24:12.373 "memory_domains": [ 00:24:12.373 { 00:24:12.373 "dma_device_id": "system", 00:24:12.373 "dma_device_type": 1 00:24:12.373 }, 00:24:12.373 { 00:24:12.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.373 "dma_device_type": 2 00:24:12.373 } 00:24:12.373 ], 00:24:12.373 "driver_specific": {} 00:24:12.373 }' 00:24:12.373 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.373 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:12.631 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.890 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.890 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:12.890 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:12.890 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:12.890 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:13.148 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:13.148 "name": "BaseBdev4", 00:24:13.148 "aliases": [ 00:24:13.148 "2ecf88bd-ed29-49ea-9c07-93a89fdf39be" 00:24:13.148 ], 00:24:13.148 "product_name": "Malloc disk", 00:24:13.148 "block_size": 512, 00:24:13.148 "num_blocks": 65536, 00:24:13.148 "uuid": "2ecf88bd-ed29-49ea-9c07-93a89fdf39be", 00:24:13.148 "assigned_rate_limits": { 00:24:13.148 "rw_ios_per_sec": 0, 00:24:13.148 "rw_mbytes_per_sec": 0, 00:24:13.148 "r_mbytes_per_sec": 0, 00:24:13.148 "w_mbytes_per_sec": 0 00:24:13.148 }, 00:24:13.148 "claimed": true, 00:24:13.148 "claim_type": "exclusive_write", 00:24:13.148 "zoned": false, 00:24:13.148 "supported_io_types": { 00:24:13.148 "read": true, 00:24:13.148 "write": true, 00:24:13.148 "unmap": true, 00:24:13.148 "flush": true, 00:24:13.148 "reset": true, 00:24:13.148 "nvme_admin": false, 00:24:13.148 "nvme_io": false, 00:24:13.148 "nvme_io_md": false, 00:24:13.148 "write_zeroes": true, 00:24:13.148 "zcopy": true, 00:24:13.148 "get_zone_info": false, 00:24:13.148 "zone_management": false, 00:24:13.148 "zone_append": false, 00:24:13.148 "compare": false, 00:24:13.148 "compare_and_write": false, 00:24:13.148 "abort": true, 00:24:13.148 "seek_hole": false, 00:24:13.148 "seek_data": false, 00:24:13.148 "copy": true, 00:24:13.148 "nvme_iov_md": false 00:24:13.148 }, 00:24:13.148 "memory_domains": [ 00:24:13.148 { 00:24:13.148 "dma_device_id": "system", 00:24:13.148 "dma_device_type": 1 00:24:13.148 }, 00:24:13.148 { 00:24:13.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.148 "dma_device_type": 2 00:24:13.148 } 00:24:13.148 ], 00:24:13.148 "driver_specific": {} 00:24:13.148 }' 00:24:13.148 14:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:13.148 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:13.148 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:13.148 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:13.148 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:13.406 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:13.663 [2024-07-25 14:07:02.646547] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:13.663 [2024-07-25 14:07:02.646603] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:24:13.663 [2024-07-25 14:07:02.646715] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:13.663 [2024-07-25 14:07:02.646800] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:13.663 [2024-07-25 14:07:02.646814] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 134143 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 134143 ']' 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 134143 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134143 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134143' 00:24:13.663 killing process with pid 134143 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 134143 00:24:13.663 [2024-07-25 14:07:02.690311] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:13.663 14:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 134143 00:24:14.227 [2024-07-25 14:07:03.028919] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:15.160 14:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:15.160 00:24:15.160 real 0m38.657s 00:24:15.160 user 1m12.189s 00:24:15.160 sys 0m4.360s 00:24:15.160 14:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.160 ************************************ 00:24:15.160 END TEST raid_state_function_test 00:24:15.160 14:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.160 ************************************ 00:24:15.160 14:07:04 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:24:15.160 14:07:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:15.160 14:07:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.160 14:07:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:15.418 ************************************ 00:24:15.418 START TEST raid_state_function_test_sb 00:24:15.418 ************************************ 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
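For reference, the raid_state_function_test trace above boils down to roughly the following RPC sequence against the test target. This is a minimal sketch only: the rpc helper function is illustrative, and the socket path and base bdev UUID are the ones printed in this particular run, not fixed values.

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Recreate the missing slot under a new bdev name but with the original UUID,
  # so the raid module can claim it and bring Existed_Raid back to "online".
  rpc bdev_malloc_create 32 512 -b NewBaseBdev -u de855a5c-6865-4d4d-83f0-3fcf1a931cdd
  rpc bdev_wait_for_examine
  # All four base bdevs should now report is_configured == true and the raid "online".
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  # Cleanup mirrors the end of the test: deleting the raid takes it offline first.
  rpc bdev_raid_delete Existed_Raid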
00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=135296 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 135296' 00:24:15.418 Process raid pid: 135296 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 135296 /var/tmp/spdk-raid.sock 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 135296 ']' 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.418 14:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.418 [2024-07-25 14:07:04.278858] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:24:15.418 [2024-07-25 14:07:04.279110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.418 [2024-07-25 14:07:04.454322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.676 [2024-07-25 14:07:04.705368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.934 [2024-07-25 14:07:04.910697] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:16.191 14:07:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.191 14:07:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:16.191 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:16.449 [2024-07-25 14:07:05.461610] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:16.449 [2024-07-25 14:07:05.461739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:16.449 [2024-07-25 14:07:05.461771] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:16.449 [2024-07-25 14:07:05.461813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:16.449 [2024-07-25 14:07:05.461835] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:16.449 [2024-07-25 14:07:05.461855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:16.449 [2024-07-25 14:07:05.461863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:16.450 [2024-07-25 14:07:05.461888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.450 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.708 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.708 "name": "Existed_Raid", 00:24:16.708 "uuid": "f500b984-7ec3-48d8-86ed-819b313068bc", 00:24:16.708 "strip_size_kb": 64, 00:24:16.708 "state": "configuring", 00:24:16.708 "raid_level": "raid0", 00:24:16.708 "superblock": true, 00:24:16.708 "num_base_bdevs": 4, 00:24:16.708 "num_base_bdevs_discovered": 0, 00:24:16.708 "num_base_bdevs_operational": 4, 00:24:16.708 "base_bdevs_list": [ 00:24:16.708 { 00:24:16.708 "name": "BaseBdev1", 00:24:16.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.708 "is_configured": false, 00:24:16.708 "data_offset": 0, 00:24:16.708 "data_size": 0 00:24:16.708 }, 00:24:16.708 { 00:24:16.708 "name": "BaseBdev2", 00:24:16.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.708 "is_configured": false, 00:24:16.708 "data_offset": 0, 00:24:16.708 "data_size": 0 00:24:16.708 }, 00:24:16.708 { 00:24:16.708 "name": "BaseBdev3", 00:24:16.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.708 "is_configured": false, 00:24:16.708 "data_offset": 0, 00:24:16.708 "data_size": 0 00:24:16.708 }, 00:24:16.708 { 00:24:16.708 "name": "BaseBdev4", 00:24:16.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.708 "is_configured": false, 00:24:16.708 "data_offset": 0, 00:24:16.708 "data_size": 0 00:24:16.708 } 00:24:16.708 ] 00:24:16.708 }' 00:24:16.708 14:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.708 14:07:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.640 14:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:17.897 [2024-07-25 14:07:06.697733] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:17.897 [2024-07-25 14:07:06.697821] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:24:17.897 14:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:17.897 [2024-07-25 14:07:06.929823] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:17.897 
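A minimal sketch of the superblock-enabled path being exercised at this point, under the same assumptions as the sketch above (illustrative rpc helper, per-run socket path). Creating the raid with -s before any base bdev exists leaves it in the "configuring" state, which is consistent with the later dumps showing data_offset 2048 and data_size 63488 instead of 0 and 65536 once base bdevs appear.

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expected: configuring
  # Creating the first base bdev flips its slot to configured while the raid stays "configuring".
  rpc bdev_malloc_create 32 512 -b BaseBdev1
  rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'            # expected: true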
[2024-07-25 14:07:06.929913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:17.897 [2024-07-25 14:07:06.929928] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:17.897 [2024-07-25 14:07:06.929988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:17.897 [2024-07-25 14:07:06.930000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:17.897 [2024-07-25 14:07:06.930039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:17.897 [2024-07-25 14:07:06.930049] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:17.897 [2024-07-25 14:07:06.930075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:18.155 14:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:18.412 [2024-07-25 14:07:07.206192] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:18.412 BaseBdev1 00:24:18.412 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:18.412 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:18.412 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:18.412 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:18.412 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:18.412 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:18.412 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:18.669 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:18.928 [ 00:24:18.928 { 00:24:18.928 "name": "BaseBdev1", 00:24:18.928 "aliases": [ 00:24:18.928 "3fef2505-7717-444f-b820-5bedd83ebf26" 00:24:18.928 ], 00:24:18.928 "product_name": "Malloc disk", 00:24:18.928 "block_size": 512, 00:24:18.928 "num_blocks": 65536, 00:24:18.928 "uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:18.928 "assigned_rate_limits": { 00:24:18.928 "rw_ios_per_sec": 0, 00:24:18.928 "rw_mbytes_per_sec": 0, 00:24:18.928 "r_mbytes_per_sec": 0, 00:24:18.928 "w_mbytes_per_sec": 0 00:24:18.928 }, 00:24:18.928 "claimed": true, 00:24:18.928 "claim_type": "exclusive_write", 00:24:18.928 "zoned": false, 00:24:18.928 "supported_io_types": { 00:24:18.928 "read": true, 00:24:18.928 "write": true, 00:24:18.928 "unmap": true, 00:24:18.928 "flush": true, 00:24:18.928 "reset": true, 00:24:18.928 "nvme_admin": false, 00:24:18.928 "nvme_io": false, 00:24:18.928 "nvme_io_md": false, 00:24:18.928 "write_zeroes": true, 00:24:18.928 "zcopy": true, 00:24:18.928 "get_zone_info": false, 00:24:18.928 "zone_management": false, 00:24:18.928 "zone_append": false, 00:24:18.928 "compare": false, 00:24:18.928 "compare_and_write": false, 00:24:18.928 "abort": true, 00:24:18.928 "seek_hole": false, 
00:24:18.928 "seek_data": false, 00:24:18.928 "copy": true, 00:24:18.928 "nvme_iov_md": false 00:24:18.928 }, 00:24:18.928 "memory_domains": [ 00:24:18.928 { 00:24:18.928 "dma_device_id": "system", 00:24:18.928 "dma_device_type": 1 00:24:18.928 }, 00:24:18.928 { 00:24:18.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.928 "dma_device_type": 2 00:24:18.928 } 00:24:18.928 ], 00:24:18.928 "driver_specific": {} 00:24:18.928 } 00:24:18.928 ] 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.928 14:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.186 14:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.186 "name": "Existed_Raid", 00:24:19.186 "uuid": "2c0d7da1-7360-4bb4-95e6-0a960334f090", 00:24:19.186 "strip_size_kb": 64, 00:24:19.186 "state": "configuring", 00:24:19.186 "raid_level": "raid0", 00:24:19.186 "superblock": true, 00:24:19.186 "num_base_bdevs": 4, 00:24:19.186 "num_base_bdevs_discovered": 1, 00:24:19.186 "num_base_bdevs_operational": 4, 00:24:19.186 "base_bdevs_list": [ 00:24:19.186 { 00:24:19.186 "name": "BaseBdev1", 00:24:19.186 "uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:19.186 "is_configured": true, 00:24:19.186 "data_offset": 2048, 00:24:19.186 "data_size": 63488 00:24:19.186 }, 00:24:19.186 { 00:24:19.186 "name": "BaseBdev2", 00:24:19.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.186 "is_configured": false, 00:24:19.186 "data_offset": 0, 00:24:19.186 "data_size": 0 00:24:19.186 }, 00:24:19.186 { 00:24:19.186 "name": "BaseBdev3", 00:24:19.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.186 "is_configured": false, 00:24:19.186 "data_offset": 0, 00:24:19.186 "data_size": 0 00:24:19.186 }, 00:24:19.186 { 00:24:19.186 "name": "BaseBdev4", 00:24:19.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.186 "is_configured": false, 00:24:19.186 "data_offset": 0, 00:24:19.186 "data_size": 0 00:24:19.186 } 00:24:19.186 ] 00:24:19.186 }' 00:24:19.186 14:07:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.186 14:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.763 14:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:20.037 [2024-07-25 14:07:08.898603] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:20.037 [2024-07-25 14:07:08.898695] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:24:20.037 14:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:20.295 [2024-07-25 14:07:09.146716] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:20.295 [2024-07-25 14:07:09.149016] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:20.295 [2024-07-25 14:07:09.149103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:20.295 [2024-07-25 14:07:09.149133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:20.295 [2024-07-25 14:07:09.149162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:20.295 [2024-07-25 14:07:09.149173] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:20.295 [2024-07-25 14:07:09.149192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.295 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.552 14:07:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.552 "name": "Existed_Raid", 00:24:20.552 "uuid": "877c1438-c6c0-4ffb-8223-112de48fdd3f", 00:24:20.552 "strip_size_kb": 64, 00:24:20.552 "state": "configuring", 00:24:20.552 "raid_level": "raid0", 00:24:20.552 "superblock": true, 00:24:20.552 "num_base_bdevs": 4, 00:24:20.552 "num_base_bdevs_discovered": 1, 00:24:20.552 "num_base_bdevs_operational": 4, 00:24:20.552 "base_bdevs_list": [ 00:24:20.552 { 00:24:20.552 "name": "BaseBdev1", 00:24:20.552 "uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:20.552 "is_configured": true, 00:24:20.552 "data_offset": 2048, 00:24:20.552 "data_size": 63488 00:24:20.552 }, 00:24:20.552 { 00:24:20.552 "name": "BaseBdev2", 00:24:20.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.552 "is_configured": false, 00:24:20.552 "data_offset": 0, 00:24:20.552 "data_size": 0 00:24:20.552 }, 00:24:20.552 { 00:24:20.552 "name": "BaseBdev3", 00:24:20.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.552 "is_configured": false, 00:24:20.552 "data_offset": 0, 00:24:20.552 "data_size": 0 00:24:20.552 }, 00:24:20.552 { 00:24:20.552 "name": "BaseBdev4", 00:24:20.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.552 "is_configured": false, 00:24:20.552 "data_offset": 0, 00:24:20.552 "data_size": 0 00:24:20.552 } 00:24:20.552 ] 00:24:20.552 }' 00:24:20.552 14:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.552 14:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.117 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:21.374 [2024-07-25 14:07:10.369051] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.374 BaseBdev2 00:24:21.374 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:21.374 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:21.374 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:21.374 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:21.374 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:21.374 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:21.374 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:21.632 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:21.890 [ 00:24:21.890 { 00:24:21.890 "name": "BaseBdev2", 00:24:21.890 "aliases": [ 00:24:21.890 "f82f77ba-1204-4026-a6b4-2932ce0f7978" 00:24:21.890 ], 00:24:21.890 "product_name": "Malloc disk", 00:24:21.890 "block_size": 512, 00:24:21.890 "num_blocks": 65536, 00:24:21.890 "uuid": "f82f77ba-1204-4026-a6b4-2932ce0f7978", 00:24:21.890 "assigned_rate_limits": { 00:24:21.890 "rw_ios_per_sec": 0, 00:24:21.890 "rw_mbytes_per_sec": 0, 00:24:21.890 "r_mbytes_per_sec": 0, 00:24:21.890 "w_mbytes_per_sec": 
0 00:24:21.890 }, 00:24:21.890 "claimed": true, 00:24:21.890 "claim_type": "exclusive_write", 00:24:21.890 "zoned": false, 00:24:21.890 "supported_io_types": { 00:24:21.890 "read": true, 00:24:21.890 "write": true, 00:24:21.890 "unmap": true, 00:24:21.890 "flush": true, 00:24:21.890 "reset": true, 00:24:21.890 "nvme_admin": false, 00:24:21.890 "nvme_io": false, 00:24:21.890 "nvme_io_md": false, 00:24:21.890 "write_zeroes": true, 00:24:21.890 "zcopy": true, 00:24:21.890 "get_zone_info": false, 00:24:21.890 "zone_management": false, 00:24:21.890 "zone_append": false, 00:24:21.890 "compare": false, 00:24:21.890 "compare_and_write": false, 00:24:21.890 "abort": true, 00:24:21.890 "seek_hole": false, 00:24:21.890 "seek_data": false, 00:24:21.890 "copy": true, 00:24:21.890 "nvme_iov_md": false 00:24:21.890 }, 00:24:21.890 "memory_domains": [ 00:24:21.890 { 00:24:21.890 "dma_device_id": "system", 00:24:21.890 "dma_device_type": 1 00:24:21.890 }, 00:24:21.890 { 00:24:21.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.890 "dma_device_type": 2 00:24:21.890 } 00:24:21.890 ], 00:24:21.890 "driver_specific": {} 00:24:21.890 } 00:24:21.890 ] 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.890 14:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.456 14:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.456 "name": "Existed_Raid", 00:24:22.456 "uuid": "877c1438-c6c0-4ffb-8223-112de48fdd3f", 00:24:22.456 "strip_size_kb": 64, 00:24:22.456 "state": "configuring", 00:24:22.456 "raid_level": "raid0", 00:24:22.456 "superblock": true, 00:24:22.456 "num_base_bdevs": 4, 00:24:22.456 "num_base_bdevs_discovered": 2, 00:24:22.456 "num_base_bdevs_operational": 4, 00:24:22.456 "base_bdevs_list": [ 00:24:22.456 { 00:24:22.456 "name": "BaseBdev1", 00:24:22.456 
"uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:22.456 "is_configured": true, 00:24:22.456 "data_offset": 2048, 00:24:22.456 "data_size": 63488 00:24:22.456 }, 00:24:22.456 { 00:24:22.456 "name": "BaseBdev2", 00:24:22.456 "uuid": "f82f77ba-1204-4026-a6b4-2932ce0f7978", 00:24:22.456 "is_configured": true, 00:24:22.456 "data_offset": 2048, 00:24:22.456 "data_size": 63488 00:24:22.456 }, 00:24:22.456 { 00:24:22.456 "name": "BaseBdev3", 00:24:22.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.456 "is_configured": false, 00:24:22.456 "data_offset": 0, 00:24:22.456 "data_size": 0 00:24:22.456 }, 00:24:22.456 { 00:24:22.456 "name": "BaseBdev4", 00:24:22.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.456 "is_configured": false, 00:24:22.456 "data_offset": 0, 00:24:22.456 "data_size": 0 00:24:22.456 } 00:24:22.456 ] 00:24:22.456 }' 00:24:22.456 14:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.456 14:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.023 14:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:23.280 [2024-07-25 14:07:12.152387] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:23.280 BaseBdev3 00:24:23.280 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:23.280 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:23.280 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:23.280 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:23.280 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:23.280 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:23.280 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:23.539 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:23.796 [ 00:24:23.796 { 00:24:23.796 "name": "BaseBdev3", 00:24:23.796 "aliases": [ 00:24:23.796 "aaa43ce2-c851-4d1b-89c2-0702a3acbfde" 00:24:23.796 ], 00:24:23.796 "product_name": "Malloc disk", 00:24:23.796 "block_size": 512, 00:24:23.796 "num_blocks": 65536, 00:24:23.796 "uuid": "aaa43ce2-c851-4d1b-89c2-0702a3acbfde", 00:24:23.796 "assigned_rate_limits": { 00:24:23.796 "rw_ios_per_sec": 0, 00:24:23.796 "rw_mbytes_per_sec": 0, 00:24:23.796 "r_mbytes_per_sec": 0, 00:24:23.796 "w_mbytes_per_sec": 0 00:24:23.796 }, 00:24:23.796 "claimed": true, 00:24:23.796 "claim_type": "exclusive_write", 00:24:23.796 "zoned": false, 00:24:23.796 "supported_io_types": { 00:24:23.796 "read": true, 00:24:23.796 "write": true, 00:24:23.796 "unmap": true, 00:24:23.796 "flush": true, 00:24:23.796 "reset": true, 00:24:23.796 "nvme_admin": false, 00:24:23.796 "nvme_io": false, 00:24:23.796 "nvme_io_md": false, 00:24:23.796 "write_zeroes": true, 00:24:23.796 "zcopy": true, 00:24:23.796 "get_zone_info": false, 00:24:23.797 "zone_management": false, 
00:24:23.797 "zone_append": false, 00:24:23.797 "compare": false, 00:24:23.797 "compare_and_write": false, 00:24:23.797 "abort": true, 00:24:23.797 "seek_hole": false, 00:24:23.797 "seek_data": false, 00:24:23.797 "copy": true, 00:24:23.797 "nvme_iov_md": false 00:24:23.797 }, 00:24:23.797 "memory_domains": [ 00:24:23.797 { 00:24:23.797 "dma_device_id": "system", 00:24:23.797 "dma_device_type": 1 00:24:23.797 }, 00:24:23.797 { 00:24:23.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.797 "dma_device_type": 2 00:24:23.797 } 00:24:23.797 ], 00:24:23.797 "driver_specific": {} 00:24:23.797 } 00:24:23.797 ] 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.797 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.055 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.055 "name": "Existed_Raid", 00:24:24.055 "uuid": "877c1438-c6c0-4ffb-8223-112de48fdd3f", 00:24:24.055 "strip_size_kb": 64, 00:24:24.055 "state": "configuring", 00:24:24.055 "raid_level": "raid0", 00:24:24.055 "superblock": true, 00:24:24.055 "num_base_bdevs": 4, 00:24:24.055 "num_base_bdevs_discovered": 3, 00:24:24.055 "num_base_bdevs_operational": 4, 00:24:24.055 "base_bdevs_list": [ 00:24:24.055 { 00:24:24.055 "name": "BaseBdev1", 00:24:24.055 "uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:24.055 "is_configured": true, 00:24:24.055 "data_offset": 2048, 00:24:24.055 "data_size": 63488 00:24:24.055 }, 00:24:24.055 { 00:24:24.055 "name": "BaseBdev2", 00:24:24.055 "uuid": "f82f77ba-1204-4026-a6b4-2932ce0f7978", 00:24:24.055 "is_configured": true, 00:24:24.055 "data_offset": 2048, 00:24:24.055 "data_size": 63488 00:24:24.055 }, 00:24:24.055 { 00:24:24.055 "name": "BaseBdev3", 00:24:24.055 "uuid": "aaa43ce2-c851-4d1b-89c2-0702a3acbfde", 00:24:24.055 "is_configured": true, 
00:24:24.055 "data_offset": 2048, 00:24:24.055 "data_size": 63488 00:24:24.055 }, 00:24:24.055 { 00:24:24.055 "name": "BaseBdev4", 00:24:24.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.055 "is_configured": false, 00:24:24.055 "data_offset": 0, 00:24:24.055 "data_size": 0 00:24:24.055 } 00:24:24.055 ] 00:24:24.055 }' 00:24:24.055 14:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.055 14:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.620 14:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:24.878 [2024-07-25 14:07:13.892722] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:24.878 [2024-07-25 14:07:13.893032] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:24:24.878 [2024-07-25 14:07:13.893049] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:24.878 [2024-07-25 14:07:13.893197] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:24.878 [2024-07-25 14:07:13.893590] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:24:24.878 [2024-07-25 14:07:13.893618] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:24:24.878 [2024-07-25 14:07:13.893773] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.878 BaseBdev4 00:24:24.878 14:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:24.878 14:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:24.878 14:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:24.878 14:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:24.878 14:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:24.878 14:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:24.878 14:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:25.136 14:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:25.393 [ 00:24:25.393 { 00:24:25.393 "name": "BaseBdev4", 00:24:25.393 "aliases": [ 00:24:25.393 "26eba74f-df98-4521-abaf-0892aaba308e" 00:24:25.393 ], 00:24:25.393 "product_name": "Malloc disk", 00:24:25.393 "block_size": 512, 00:24:25.393 "num_blocks": 65536, 00:24:25.393 "uuid": "26eba74f-df98-4521-abaf-0892aaba308e", 00:24:25.393 "assigned_rate_limits": { 00:24:25.393 "rw_ios_per_sec": 0, 00:24:25.393 "rw_mbytes_per_sec": 0, 00:24:25.393 "r_mbytes_per_sec": 0, 00:24:25.393 "w_mbytes_per_sec": 0 00:24:25.393 }, 00:24:25.393 "claimed": true, 00:24:25.393 "claim_type": "exclusive_write", 00:24:25.393 "zoned": false, 00:24:25.393 "supported_io_types": { 00:24:25.393 "read": true, 00:24:25.393 "write": true, 00:24:25.393 "unmap": true, 00:24:25.393 "flush": true, 00:24:25.393 "reset": 
true, 00:24:25.393 "nvme_admin": false, 00:24:25.393 "nvme_io": false, 00:24:25.393 "nvme_io_md": false, 00:24:25.393 "write_zeroes": true, 00:24:25.393 "zcopy": true, 00:24:25.393 "get_zone_info": false, 00:24:25.393 "zone_management": false, 00:24:25.393 "zone_append": false, 00:24:25.393 "compare": false, 00:24:25.393 "compare_and_write": false, 00:24:25.393 "abort": true, 00:24:25.393 "seek_hole": false, 00:24:25.393 "seek_data": false, 00:24:25.393 "copy": true, 00:24:25.393 "nvme_iov_md": false 00:24:25.393 }, 00:24:25.393 "memory_domains": [ 00:24:25.393 { 00:24:25.393 "dma_device_id": "system", 00:24:25.393 "dma_device_type": 1 00:24:25.393 }, 00:24:25.393 { 00:24:25.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.393 "dma_device_type": 2 00:24:25.393 } 00:24:25.394 ], 00:24:25.394 "driver_specific": {} 00:24:25.394 } 00:24:25.394 ] 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.394 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.651 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.651 "name": "Existed_Raid", 00:24:25.651 "uuid": "877c1438-c6c0-4ffb-8223-112de48fdd3f", 00:24:25.651 "strip_size_kb": 64, 00:24:25.651 "state": "online", 00:24:25.651 "raid_level": "raid0", 00:24:25.651 "superblock": true, 00:24:25.651 "num_base_bdevs": 4, 00:24:25.651 "num_base_bdevs_discovered": 4, 00:24:25.651 "num_base_bdevs_operational": 4, 00:24:25.651 "base_bdevs_list": [ 00:24:25.651 { 00:24:25.651 "name": "BaseBdev1", 00:24:25.651 "uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:25.651 "is_configured": true, 00:24:25.651 "data_offset": 2048, 00:24:25.651 "data_size": 63488 00:24:25.651 }, 00:24:25.651 { 00:24:25.651 "name": "BaseBdev2", 00:24:25.651 "uuid": "f82f77ba-1204-4026-a6b4-2932ce0f7978", 00:24:25.651 "is_configured": true, 
00:24:25.651 "data_offset": 2048, 00:24:25.651 "data_size": 63488 00:24:25.651 }, 00:24:25.651 { 00:24:25.651 "name": "BaseBdev3", 00:24:25.651 "uuid": "aaa43ce2-c851-4d1b-89c2-0702a3acbfde", 00:24:25.651 "is_configured": true, 00:24:25.651 "data_offset": 2048, 00:24:25.651 "data_size": 63488 00:24:25.651 }, 00:24:25.651 { 00:24:25.651 "name": "BaseBdev4", 00:24:25.651 "uuid": "26eba74f-df98-4521-abaf-0892aaba308e", 00:24:25.651 "is_configured": true, 00:24:25.651 "data_offset": 2048, 00:24:25.651 "data_size": 63488 00:24:25.651 } 00:24:25.651 ] 00:24:25.651 }' 00:24:25.651 14:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.651 14:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.583 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:26.583 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:26.583 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:26.583 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:26.584 [2024-07-25 14:07:15.522987] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:26.584 "name": "Existed_Raid", 00:24:26.584 "aliases": [ 00:24:26.584 "877c1438-c6c0-4ffb-8223-112de48fdd3f" 00:24:26.584 ], 00:24:26.584 "product_name": "Raid Volume", 00:24:26.584 "block_size": 512, 00:24:26.584 "num_blocks": 253952, 00:24:26.584 "uuid": "877c1438-c6c0-4ffb-8223-112de48fdd3f", 00:24:26.584 "assigned_rate_limits": { 00:24:26.584 "rw_ios_per_sec": 0, 00:24:26.584 "rw_mbytes_per_sec": 0, 00:24:26.584 "r_mbytes_per_sec": 0, 00:24:26.584 "w_mbytes_per_sec": 0 00:24:26.584 }, 00:24:26.584 "claimed": false, 00:24:26.584 "zoned": false, 00:24:26.584 "supported_io_types": { 00:24:26.584 "read": true, 00:24:26.584 "write": true, 00:24:26.584 "unmap": true, 00:24:26.584 "flush": true, 00:24:26.584 "reset": true, 00:24:26.584 "nvme_admin": false, 00:24:26.584 "nvme_io": false, 00:24:26.584 "nvme_io_md": false, 00:24:26.584 "write_zeroes": true, 00:24:26.584 "zcopy": false, 00:24:26.584 "get_zone_info": false, 00:24:26.584 "zone_management": false, 00:24:26.584 "zone_append": false, 00:24:26.584 "compare": false, 00:24:26.584 "compare_and_write": false, 00:24:26.584 "abort": false, 00:24:26.584 "seek_hole": false, 00:24:26.584 "seek_data": false, 00:24:26.584 "copy": false, 00:24:26.584 "nvme_iov_md": false 00:24:26.584 }, 00:24:26.584 "memory_domains": [ 00:24:26.584 { 00:24:26.584 "dma_device_id": "system", 00:24:26.584 "dma_device_type": 1 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.584 "dma_device_type": 2 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "dma_device_id": "system", 
00:24:26.584 "dma_device_type": 1 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.584 "dma_device_type": 2 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "dma_device_id": "system", 00:24:26.584 "dma_device_type": 1 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.584 "dma_device_type": 2 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "dma_device_id": "system", 00:24:26.584 "dma_device_type": 1 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.584 "dma_device_type": 2 00:24:26.584 } 00:24:26.584 ], 00:24:26.584 "driver_specific": { 00:24:26.584 "raid": { 00:24:26.584 "uuid": "877c1438-c6c0-4ffb-8223-112de48fdd3f", 00:24:26.584 "strip_size_kb": 64, 00:24:26.584 "state": "online", 00:24:26.584 "raid_level": "raid0", 00:24:26.584 "superblock": true, 00:24:26.584 "num_base_bdevs": 4, 00:24:26.584 "num_base_bdevs_discovered": 4, 00:24:26.584 "num_base_bdevs_operational": 4, 00:24:26.584 "base_bdevs_list": [ 00:24:26.584 { 00:24:26.584 "name": "BaseBdev1", 00:24:26.584 "uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:26.584 "is_configured": true, 00:24:26.584 "data_offset": 2048, 00:24:26.584 "data_size": 63488 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "name": "BaseBdev2", 00:24:26.584 "uuid": "f82f77ba-1204-4026-a6b4-2932ce0f7978", 00:24:26.584 "is_configured": true, 00:24:26.584 "data_offset": 2048, 00:24:26.584 "data_size": 63488 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "name": "BaseBdev3", 00:24:26.584 "uuid": "aaa43ce2-c851-4d1b-89c2-0702a3acbfde", 00:24:26.584 "is_configured": true, 00:24:26.584 "data_offset": 2048, 00:24:26.584 "data_size": 63488 00:24:26.584 }, 00:24:26.584 { 00:24:26.584 "name": "BaseBdev4", 00:24:26.584 "uuid": "26eba74f-df98-4521-abaf-0892aaba308e", 00:24:26.584 "is_configured": true, 00:24:26.584 "data_offset": 2048, 00:24:26.584 "data_size": 63488 00:24:26.584 } 00:24:26.584 ] 00:24:26.584 } 00:24:26.584 } 00:24:26.584 }' 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:26.584 BaseBdev2 00:24:26.584 BaseBdev3 00:24:26.584 BaseBdev4' 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:26.584 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:27.150 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:27.150 "name": "BaseBdev1", 00:24:27.150 "aliases": [ 00:24:27.150 "3fef2505-7717-444f-b820-5bedd83ebf26" 00:24:27.150 ], 00:24:27.150 "product_name": "Malloc disk", 00:24:27.150 "block_size": 512, 00:24:27.150 "num_blocks": 65536, 00:24:27.150 "uuid": "3fef2505-7717-444f-b820-5bedd83ebf26", 00:24:27.150 "assigned_rate_limits": { 00:24:27.150 "rw_ios_per_sec": 0, 00:24:27.150 "rw_mbytes_per_sec": 0, 00:24:27.150 "r_mbytes_per_sec": 0, 00:24:27.150 "w_mbytes_per_sec": 0 00:24:27.150 }, 00:24:27.151 "claimed": true, 00:24:27.151 "claim_type": "exclusive_write", 00:24:27.151 "zoned": false, 00:24:27.151 "supported_io_types": { 00:24:27.151 
"read": true, 00:24:27.151 "write": true, 00:24:27.151 "unmap": true, 00:24:27.151 "flush": true, 00:24:27.151 "reset": true, 00:24:27.151 "nvme_admin": false, 00:24:27.151 "nvme_io": false, 00:24:27.151 "nvme_io_md": false, 00:24:27.151 "write_zeroes": true, 00:24:27.151 "zcopy": true, 00:24:27.151 "get_zone_info": false, 00:24:27.151 "zone_management": false, 00:24:27.151 "zone_append": false, 00:24:27.151 "compare": false, 00:24:27.151 "compare_and_write": false, 00:24:27.151 "abort": true, 00:24:27.151 "seek_hole": false, 00:24:27.151 "seek_data": false, 00:24:27.151 "copy": true, 00:24:27.151 "nvme_iov_md": false 00:24:27.151 }, 00:24:27.151 "memory_domains": [ 00:24:27.151 { 00:24:27.151 "dma_device_id": "system", 00:24:27.151 "dma_device_type": 1 00:24:27.151 }, 00:24:27.151 { 00:24:27.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.151 "dma_device_type": 2 00:24:27.151 } 00:24:27.151 ], 00:24:27.151 "driver_specific": {} 00:24:27.151 }' 00:24:27.151 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:27.151 14:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:27.151 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:27.151 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:27.151 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:27.151 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:27.151 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:27.151 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:27.410 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:27.410 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:27.410 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:27.410 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:27.410 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:27.410 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:27.410 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:27.670 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:27.670 "name": "BaseBdev2", 00:24:27.670 "aliases": [ 00:24:27.670 "f82f77ba-1204-4026-a6b4-2932ce0f7978" 00:24:27.670 ], 00:24:27.670 "product_name": "Malloc disk", 00:24:27.670 "block_size": 512, 00:24:27.670 "num_blocks": 65536, 00:24:27.670 "uuid": "f82f77ba-1204-4026-a6b4-2932ce0f7978", 00:24:27.670 "assigned_rate_limits": { 00:24:27.670 "rw_ios_per_sec": 0, 00:24:27.670 "rw_mbytes_per_sec": 0, 00:24:27.670 "r_mbytes_per_sec": 0, 00:24:27.670 "w_mbytes_per_sec": 0 00:24:27.670 }, 00:24:27.670 "claimed": true, 00:24:27.670 "claim_type": "exclusive_write", 00:24:27.670 "zoned": false, 00:24:27.670 "supported_io_types": { 00:24:27.670 "read": true, 00:24:27.670 "write": true, 00:24:27.670 "unmap": true, 00:24:27.670 "flush": true, 00:24:27.670 "reset": true, 00:24:27.670 "nvme_admin": 
false, 00:24:27.670 "nvme_io": false, 00:24:27.670 "nvme_io_md": false, 00:24:27.670 "write_zeroes": true, 00:24:27.670 "zcopy": true, 00:24:27.670 "get_zone_info": false, 00:24:27.670 "zone_management": false, 00:24:27.670 "zone_append": false, 00:24:27.670 "compare": false, 00:24:27.670 "compare_and_write": false, 00:24:27.670 "abort": true, 00:24:27.670 "seek_hole": false, 00:24:27.670 "seek_data": false, 00:24:27.670 "copy": true, 00:24:27.670 "nvme_iov_md": false 00:24:27.670 }, 00:24:27.670 "memory_domains": [ 00:24:27.670 { 00:24:27.670 "dma_device_id": "system", 00:24:27.670 "dma_device_type": 1 00:24:27.670 }, 00:24:27.670 { 00:24:27.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.670 "dma_device_type": 2 00:24:27.670 } 00:24:27.670 ], 00:24:27.670 "driver_specific": {} 00:24:27.670 }' 00:24:27.670 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:27.670 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:27.670 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:27.670 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:27.928 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:27.928 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:27.928 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:27.928 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:27.928 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:27.928 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:27.928 14:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:28.186 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:28.186 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:28.186 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:28.186 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:28.445 "name": "BaseBdev3", 00:24:28.445 "aliases": [ 00:24:28.445 "aaa43ce2-c851-4d1b-89c2-0702a3acbfde" 00:24:28.445 ], 00:24:28.445 "product_name": "Malloc disk", 00:24:28.445 "block_size": 512, 00:24:28.445 "num_blocks": 65536, 00:24:28.445 "uuid": "aaa43ce2-c851-4d1b-89c2-0702a3acbfde", 00:24:28.445 "assigned_rate_limits": { 00:24:28.445 "rw_ios_per_sec": 0, 00:24:28.445 "rw_mbytes_per_sec": 0, 00:24:28.445 "r_mbytes_per_sec": 0, 00:24:28.445 "w_mbytes_per_sec": 0 00:24:28.445 }, 00:24:28.445 "claimed": true, 00:24:28.445 "claim_type": "exclusive_write", 00:24:28.445 "zoned": false, 00:24:28.445 "supported_io_types": { 00:24:28.445 "read": true, 00:24:28.445 "write": true, 00:24:28.445 "unmap": true, 00:24:28.445 "flush": true, 00:24:28.445 "reset": true, 00:24:28.445 "nvme_admin": false, 00:24:28.445 "nvme_io": false, 00:24:28.445 "nvme_io_md": false, 00:24:28.445 "write_zeroes": true, 00:24:28.445 "zcopy": true, 00:24:28.445 
"get_zone_info": false, 00:24:28.445 "zone_management": false, 00:24:28.445 "zone_append": false, 00:24:28.445 "compare": false, 00:24:28.445 "compare_and_write": false, 00:24:28.445 "abort": true, 00:24:28.445 "seek_hole": false, 00:24:28.445 "seek_data": false, 00:24:28.445 "copy": true, 00:24:28.445 "nvme_iov_md": false 00:24:28.445 }, 00:24:28.445 "memory_domains": [ 00:24:28.445 { 00:24:28.445 "dma_device_id": "system", 00:24:28.445 "dma_device_type": 1 00:24:28.445 }, 00:24:28.445 { 00:24:28.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.445 "dma_device_type": 2 00:24:28.445 } 00:24:28.445 ], 00:24:28.445 "driver_specific": {} 00:24:28.445 }' 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:28.445 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:28.703 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:28.960 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:28.960 "name": "BaseBdev4", 00:24:28.960 "aliases": [ 00:24:28.960 "26eba74f-df98-4521-abaf-0892aaba308e" 00:24:28.960 ], 00:24:28.960 "product_name": "Malloc disk", 00:24:28.960 "block_size": 512, 00:24:28.960 "num_blocks": 65536, 00:24:28.960 "uuid": "26eba74f-df98-4521-abaf-0892aaba308e", 00:24:28.960 "assigned_rate_limits": { 00:24:28.960 "rw_ios_per_sec": 0, 00:24:28.960 "rw_mbytes_per_sec": 0, 00:24:28.960 "r_mbytes_per_sec": 0, 00:24:28.960 "w_mbytes_per_sec": 0 00:24:28.960 }, 00:24:28.960 "claimed": true, 00:24:28.960 "claim_type": "exclusive_write", 00:24:28.960 "zoned": false, 00:24:28.960 "supported_io_types": { 00:24:28.960 "read": true, 00:24:28.960 "write": true, 00:24:28.960 "unmap": true, 00:24:28.960 "flush": true, 00:24:28.960 "reset": true, 00:24:28.960 "nvme_admin": false, 00:24:28.960 "nvme_io": false, 00:24:28.960 "nvme_io_md": false, 00:24:28.960 "write_zeroes": true, 00:24:28.960 "zcopy": true, 00:24:28.960 "get_zone_info": false, 00:24:28.960 "zone_management": false, 00:24:28.960 "zone_append": false, 00:24:28.960 "compare": false, 00:24:28.960 
"compare_and_write": false, 00:24:28.960 "abort": true, 00:24:28.960 "seek_hole": false, 00:24:28.960 "seek_data": false, 00:24:28.960 "copy": true, 00:24:28.961 "nvme_iov_md": false 00:24:28.961 }, 00:24:28.961 "memory_domains": [ 00:24:28.961 { 00:24:28.961 "dma_device_id": "system", 00:24:28.961 "dma_device_type": 1 00:24:28.961 }, 00:24:28.961 { 00:24:28.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.961 "dma_device_type": 2 00:24:28.961 } 00:24:28.961 ], 00:24:28.961 "driver_specific": {} 00:24:28.961 }' 00:24:28.961 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:28.961 14:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:29.219 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:29.476 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:29.476 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:29.735 [2024-07-25 14:07:18.566465] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:29.735 [2024-07-25 14:07:18.566520] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:29.735 [2024-07-25 14:07:18.566577] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.735 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.993 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.993 "name": "Existed_Raid", 00:24:29.993 "uuid": "877c1438-c6c0-4ffb-8223-112de48fdd3f", 00:24:29.993 "strip_size_kb": 64, 00:24:29.993 "state": "offline", 00:24:29.993 "raid_level": "raid0", 00:24:29.993 "superblock": true, 00:24:29.993 "num_base_bdevs": 4, 00:24:29.993 "num_base_bdevs_discovered": 3, 00:24:29.993 "num_base_bdevs_operational": 3, 00:24:29.994 "base_bdevs_list": [ 00:24:29.994 { 00:24:29.994 "name": null, 00:24:29.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.994 "is_configured": false, 00:24:29.994 "data_offset": 2048, 00:24:29.994 "data_size": 63488 00:24:29.994 }, 00:24:29.994 { 00:24:29.994 "name": "BaseBdev2", 00:24:29.994 "uuid": "f82f77ba-1204-4026-a6b4-2932ce0f7978", 00:24:29.994 "is_configured": true, 00:24:29.994 "data_offset": 2048, 00:24:29.994 "data_size": 63488 00:24:29.994 }, 00:24:29.994 { 00:24:29.994 "name": "BaseBdev3", 00:24:29.994 "uuid": "aaa43ce2-c851-4d1b-89c2-0702a3acbfde", 00:24:29.994 "is_configured": true, 00:24:29.994 "data_offset": 2048, 00:24:29.994 "data_size": 63488 00:24:29.994 }, 00:24:29.994 { 00:24:29.994 "name": "BaseBdev4", 00:24:29.994 "uuid": "26eba74f-df98-4521-abaf-0892aaba308e", 00:24:29.994 "is_configured": true, 00:24:29.994 "data_offset": 2048, 00:24:29.994 "data_size": 63488 00:24:29.994 } 00:24:29.994 ] 00:24:29.994 }' 00:24:29.994 14:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.994 14:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.928 14:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:30.928 14:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:30.928 14:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.928 14:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:30.928 14:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:30.928 14:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:30.928 14:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:31.185 [2024-07-25 14:07:20.155200] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:31.444 14:07:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:31.444 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:31.444 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.444 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:31.702 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:31.702 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:31.702 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:31.702 [2024-07-25 14:07:20.730381] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:31.959 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:31.960 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:31.960 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.960 14:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:32.218 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:32.218 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:32.218 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:32.477 [2024-07-25 14:07:21.397144] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:32.477 [2024-07-25 14:07:21.397216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:24:32.477 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:32.477 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:32.477 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.477 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:33.041 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:33.041 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:33.041 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:33.041 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:33.041 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:33.041 14:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:33.299 BaseBdev2 00:24:33.299 14:07:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:33.299 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:33.299 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:33.299 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:33.299 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:33.299 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:33.299 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:33.556 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:33.814 [ 00:24:33.814 { 00:24:33.814 "name": "BaseBdev2", 00:24:33.814 "aliases": [ 00:24:33.814 "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1" 00:24:33.814 ], 00:24:33.814 "product_name": "Malloc disk", 00:24:33.814 "block_size": 512, 00:24:33.814 "num_blocks": 65536, 00:24:33.814 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:33.814 "assigned_rate_limits": { 00:24:33.814 "rw_ios_per_sec": 0, 00:24:33.814 "rw_mbytes_per_sec": 0, 00:24:33.814 "r_mbytes_per_sec": 0, 00:24:33.814 "w_mbytes_per_sec": 0 00:24:33.814 }, 00:24:33.814 "claimed": false, 00:24:33.814 "zoned": false, 00:24:33.814 "supported_io_types": { 00:24:33.814 "read": true, 00:24:33.814 "write": true, 00:24:33.814 "unmap": true, 00:24:33.814 "flush": true, 00:24:33.814 "reset": true, 00:24:33.814 "nvme_admin": false, 00:24:33.814 "nvme_io": false, 00:24:33.814 "nvme_io_md": false, 00:24:33.814 "write_zeroes": true, 00:24:33.814 "zcopy": true, 00:24:33.814 "get_zone_info": false, 00:24:33.814 "zone_management": false, 00:24:33.814 "zone_append": false, 00:24:33.814 "compare": false, 00:24:33.814 "compare_and_write": false, 00:24:33.814 "abort": true, 00:24:33.814 "seek_hole": false, 00:24:33.814 "seek_data": false, 00:24:33.814 "copy": true, 00:24:33.814 "nvme_iov_md": false 00:24:33.814 }, 00:24:33.814 "memory_domains": [ 00:24:33.814 { 00:24:33.814 "dma_device_id": "system", 00:24:33.814 "dma_device_type": 1 00:24:33.814 }, 00:24:33.814 { 00:24:33.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.814 "dma_device_type": 2 00:24:33.814 } 00:24:33.814 ], 00:24:33.814 "driver_specific": {} 00:24:33.814 } 00:24:33.814 ] 00:24:33.814 14:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:33.814 14:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:33.814 14:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:33.814 14:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:34.072 BaseBdev3 00:24:34.072 14:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:34.072 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:34.072 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:24:34.072 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:34.072 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:34.072 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:34.072 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.636 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:34.894 [ 00:24:34.894 { 00:24:34.894 "name": "BaseBdev3", 00:24:34.894 "aliases": [ 00:24:34.894 "a3115b31-603b-4c15-8276-b463996f0cf5" 00:24:34.894 ], 00:24:34.894 "product_name": "Malloc disk", 00:24:34.894 "block_size": 512, 00:24:34.894 "num_blocks": 65536, 00:24:34.894 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:34.894 "assigned_rate_limits": { 00:24:34.894 "rw_ios_per_sec": 0, 00:24:34.894 "rw_mbytes_per_sec": 0, 00:24:34.894 "r_mbytes_per_sec": 0, 00:24:34.894 "w_mbytes_per_sec": 0 00:24:34.894 }, 00:24:34.894 "claimed": false, 00:24:34.894 "zoned": false, 00:24:34.894 "supported_io_types": { 00:24:34.894 "read": true, 00:24:34.894 "write": true, 00:24:34.894 "unmap": true, 00:24:34.894 "flush": true, 00:24:34.894 "reset": true, 00:24:34.894 "nvme_admin": false, 00:24:34.894 "nvme_io": false, 00:24:34.894 "nvme_io_md": false, 00:24:34.894 "write_zeroes": true, 00:24:34.894 "zcopy": true, 00:24:34.894 "get_zone_info": false, 00:24:34.894 "zone_management": false, 00:24:34.894 "zone_append": false, 00:24:34.894 "compare": false, 00:24:34.894 "compare_and_write": false, 00:24:34.894 "abort": true, 00:24:34.894 "seek_hole": false, 00:24:34.894 "seek_data": false, 00:24:34.894 "copy": true, 00:24:34.894 "nvme_iov_md": false 00:24:34.894 }, 00:24:34.894 "memory_domains": [ 00:24:34.894 { 00:24:34.894 "dma_device_id": "system", 00:24:34.894 "dma_device_type": 1 00:24:34.894 }, 00:24:34.894 { 00:24:34.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.894 "dma_device_type": 2 00:24:34.894 } 00:24:34.894 ], 00:24:34.894 "driver_specific": {} 00:24:34.894 } 00:24:34.894 ] 00:24:34.894 14:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:34.894 14:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:34.894 14:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:34.894 14:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:35.151 BaseBdev4 00:24:35.151 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:35.151 14:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:35.151 14:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:35.151 14:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:35.151 14:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:35.151 14:07:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:35.151 14:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:35.408 14:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:35.666 [ 00:24:35.666 { 00:24:35.666 "name": "BaseBdev4", 00:24:35.666 "aliases": [ 00:24:35.666 "b2cafa36-090b-440f-8026-7624c738d978" 00:24:35.666 ], 00:24:35.666 "product_name": "Malloc disk", 00:24:35.666 "block_size": 512, 00:24:35.666 "num_blocks": 65536, 00:24:35.666 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:35.666 "assigned_rate_limits": { 00:24:35.666 "rw_ios_per_sec": 0, 00:24:35.666 "rw_mbytes_per_sec": 0, 00:24:35.666 "r_mbytes_per_sec": 0, 00:24:35.666 "w_mbytes_per_sec": 0 00:24:35.666 }, 00:24:35.666 "claimed": false, 00:24:35.666 "zoned": false, 00:24:35.666 "supported_io_types": { 00:24:35.666 "read": true, 00:24:35.666 "write": true, 00:24:35.666 "unmap": true, 00:24:35.666 "flush": true, 00:24:35.666 "reset": true, 00:24:35.666 "nvme_admin": false, 00:24:35.666 "nvme_io": false, 00:24:35.666 "nvme_io_md": false, 00:24:35.666 "write_zeroes": true, 00:24:35.666 "zcopy": true, 00:24:35.666 "get_zone_info": false, 00:24:35.666 "zone_management": false, 00:24:35.666 "zone_append": false, 00:24:35.666 "compare": false, 00:24:35.666 "compare_and_write": false, 00:24:35.666 "abort": true, 00:24:35.666 "seek_hole": false, 00:24:35.666 "seek_data": false, 00:24:35.666 "copy": true, 00:24:35.666 "nvme_iov_md": false 00:24:35.666 }, 00:24:35.666 "memory_domains": [ 00:24:35.666 { 00:24:35.666 "dma_device_id": "system", 00:24:35.666 "dma_device_type": 1 00:24:35.666 }, 00:24:35.666 { 00:24:35.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.666 "dma_device_type": 2 00:24:35.666 } 00:24:35.666 ], 00:24:35.666 "driver_specific": {} 00:24:35.666 } 00:24:35.666 ] 00:24:35.666 14:07:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:35.666 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:35.666 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:35.666 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:35.923 [2024-07-25 14:07:24.879716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:35.923 [2024-07-25 14:07:24.879825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:35.923 [2024-07-25 14:07:24.879857] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:35.923 [2024-07-25 14:07:24.882141] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:35.923 [2024-07-25 14:07:24.882215] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
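For readers skimming the trace above, the snippet below condenses the setup the test has performed so far against its dedicated RPC socket: three of the four malloc base bdevs are created and waited on, then the raid0 bdev with an on-disk superblock is declared over all four names, and the array stays in the "configuring" state because BaseBdev1 does not exist yet. This is a hedged sketch assembled only from the rpc.py calls visible in the log, not the test script itself; the socket path, sizes, and flags are the ones the trace uses.

```bash
#!/usr/bin/env bash
# Condensed from the rpc.py calls traced above (an SPDK target is assumed to be
# listening on /var/tmp/spdk-raid.sock).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Create three of the four 32 MiB (65536 x 512-byte block) base bdevs and wait
# for each one to appear, mirroring the waitforbdev helper in the trace.
for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
    "$rpc" -s "$sock" bdev_wait_for_examine
    "$rpc" -s "$sock" bdev_get_bdevs -b "$b" -t 2000   # waitforbdev equivalent
done

# Declare the raid0 bdev with a superblock (-s) and 64 KiB strip size over all
# four base bdev names; BaseBdev1 is still missing, so the array remains
# "configuring" with 3 of 4 base bdevs discovered.
"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
```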
00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.923 14:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.180 14:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.180 "name": "Existed_Raid", 00:24:36.180 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:36.180 "strip_size_kb": 64, 00:24:36.180 "state": "configuring", 00:24:36.180 "raid_level": "raid0", 00:24:36.180 "superblock": true, 00:24:36.180 "num_base_bdevs": 4, 00:24:36.180 "num_base_bdevs_discovered": 3, 00:24:36.180 "num_base_bdevs_operational": 4, 00:24:36.180 "base_bdevs_list": [ 00:24:36.180 { 00:24:36.180 "name": "BaseBdev1", 00:24:36.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.180 "is_configured": false, 00:24:36.180 "data_offset": 0, 00:24:36.180 "data_size": 0 00:24:36.180 }, 00:24:36.180 { 00:24:36.180 "name": "BaseBdev2", 00:24:36.180 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:36.180 "is_configured": true, 00:24:36.180 "data_offset": 2048, 00:24:36.180 "data_size": 63488 00:24:36.180 }, 00:24:36.180 { 00:24:36.180 "name": "BaseBdev3", 00:24:36.180 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:36.180 "is_configured": true, 00:24:36.180 "data_offset": 2048, 00:24:36.180 "data_size": 63488 00:24:36.180 }, 00:24:36.181 { 00:24:36.181 "name": "BaseBdev4", 00:24:36.181 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:36.181 "is_configured": true, 00:24:36.181 "data_offset": 2048, 00:24:36.181 "data_size": 63488 00:24:36.181 } 00:24:36.181 ] 00:24:36.181 }' 00:24:36.181 14:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.181 14:07:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.128 14:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:37.128 [2024-07-25 14:07:26.063898] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.128 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.129 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:37.386 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.386 "name": "Existed_Raid", 00:24:37.386 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:37.386 "strip_size_kb": 64, 00:24:37.386 "state": "configuring", 00:24:37.386 "raid_level": "raid0", 00:24:37.386 "superblock": true, 00:24:37.386 "num_base_bdevs": 4, 00:24:37.386 "num_base_bdevs_discovered": 2, 00:24:37.386 "num_base_bdevs_operational": 4, 00:24:37.386 "base_bdevs_list": [ 00:24:37.386 { 00:24:37.386 "name": "BaseBdev1", 00:24:37.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.386 "is_configured": false, 00:24:37.386 "data_offset": 0, 00:24:37.386 "data_size": 0 00:24:37.386 }, 00:24:37.386 { 00:24:37.386 "name": null, 00:24:37.386 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:37.386 "is_configured": false, 00:24:37.386 "data_offset": 2048, 00:24:37.386 "data_size": 63488 00:24:37.386 }, 00:24:37.386 { 00:24:37.386 "name": "BaseBdev3", 00:24:37.386 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:37.386 "is_configured": true, 00:24:37.386 "data_offset": 2048, 00:24:37.386 "data_size": 63488 00:24:37.386 }, 00:24:37.386 { 00:24:37.386 "name": "BaseBdev4", 00:24:37.386 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:37.386 "is_configured": true, 00:24:37.386 "data_offset": 2048, 00:24:37.386 "data_size": 63488 00:24:37.386 } 00:24:37.386 ] 00:24:37.386 }' 00:24:37.386 14:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.386 14:07:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.319 14:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.319 14:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:38.578 14:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:38.578 14:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:38.836 [2024-07-25 14:07:27.739395] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
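The remove-and-recheck pattern that repeats through the rest of the trace boils down to one removal RPC plus two queries filtered with jq. The snippet below is a hedged reconstruction of that checking logic from the commands visible in the log; check_raid_state is an illustrative helper name, not a function from bdev_raid.sh.

```bash
# Hedged reconstruction of the state checks traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

check_raid_state() {
    # Compare the raid bdev's reported state against the expected one.
    local expected=$1 state
    state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ $state == "$expected" ]] || { echo "state=$state, want $expected"; return 1; }
}

# Removing a base bdev from a not-yet-assembled array leaves its slot
# unconfigured but keeps the array itself in "configuring".
"$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev2
check_raid_state configuring
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq '.[0].base_bdevs_list[1].is_configured'     # expected: false
```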
00:24:38.836 BaseBdev1 00:24:38.836 14:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:38.836 14:07:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:38.836 14:07:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:38.836 14:07:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:38.836 14:07:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:38.836 14:07:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:38.836 14:07:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:39.094 14:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:39.351 [ 00:24:39.351 { 00:24:39.351 "name": "BaseBdev1", 00:24:39.351 "aliases": [ 00:24:39.351 "1096161d-f3fa-47a5-9096-fe5b940b7e4e" 00:24:39.351 ], 00:24:39.351 "product_name": "Malloc disk", 00:24:39.351 "block_size": 512, 00:24:39.351 "num_blocks": 65536, 00:24:39.351 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:39.351 "assigned_rate_limits": { 00:24:39.352 "rw_ios_per_sec": 0, 00:24:39.352 "rw_mbytes_per_sec": 0, 00:24:39.352 "r_mbytes_per_sec": 0, 00:24:39.352 "w_mbytes_per_sec": 0 00:24:39.352 }, 00:24:39.352 "claimed": true, 00:24:39.352 "claim_type": "exclusive_write", 00:24:39.352 "zoned": false, 00:24:39.352 "supported_io_types": { 00:24:39.352 "read": true, 00:24:39.352 "write": true, 00:24:39.352 "unmap": true, 00:24:39.352 "flush": true, 00:24:39.352 "reset": true, 00:24:39.352 "nvme_admin": false, 00:24:39.352 "nvme_io": false, 00:24:39.352 "nvme_io_md": false, 00:24:39.352 "write_zeroes": true, 00:24:39.352 "zcopy": true, 00:24:39.352 "get_zone_info": false, 00:24:39.352 "zone_management": false, 00:24:39.352 "zone_append": false, 00:24:39.352 "compare": false, 00:24:39.352 "compare_and_write": false, 00:24:39.352 "abort": true, 00:24:39.352 "seek_hole": false, 00:24:39.352 "seek_data": false, 00:24:39.352 "copy": true, 00:24:39.352 "nvme_iov_md": false 00:24:39.352 }, 00:24:39.352 "memory_domains": [ 00:24:39.352 { 00:24:39.352 "dma_device_id": "system", 00:24:39.352 "dma_device_type": 1 00:24:39.352 }, 00:24:39.352 { 00:24:39.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.352 "dma_device_type": 2 00:24:39.352 } 00:24:39.352 ], 00:24:39.352 "driver_specific": {} 00:24:39.352 } 00:24:39.352 ] 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.352 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.610 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.610 "name": "Existed_Raid", 00:24:39.610 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:39.610 "strip_size_kb": 64, 00:24:39.610 "state": "configuring", 00:24:39.610 "raid_level": "raid0", 00:24:39.610 "superblock": true, 00:24:39.610 "num_base_bdevs": 4, 00:24:39.610 "num_base_bdevs_discovered": 3, 00:24:39.610 "num_base_bdevs_operational": 4, 00:24:39.610 "base_bdevs_list": [ 00:24:39.610 { 00:24:39.610 "name": "BaseBdev1", 00:24:39.610 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:39.610 "is_configured": true, 00:24:39.610 "data_offset": 2048, 00:24:39.610 "data_size": 63488 00:24:39.610 }, 00:24:39.610 { 00:24:39.610 "name": null, 00:24:39.610 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:39.610 "is_configured": false, 00:24:39.610 "data_offset": 2048, 00:24:39.610 "data_size": 63488 00:24:39.610 }, 00:24:39.610 { 00:24:39.610 "name": "BaseBdev3", 00:24:39.610 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:39.610 "is_configured": true, 00:24:39.610 "data_offset": 2048, 00:24:39.610 "data_size": 63488 00:24:39.610 }, 00:24:39.610 { 00:24:39.610 "name": "BaseBdev4", 00:24:39.610 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:39.610 "is_configured": true, 00:24:39.610 "data_offset": 2048, 00:24:39.610 "data_size": 63488 00:24:39.610 } 00:24:39.610 ] 00:24:39.610 }' 00:24:39.610 14:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.610 14:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.543 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:40.543 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.543 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:40.543 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:40.801 [2024-07-25 14:07:29.759952] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.801 14:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.058 14:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:41.058 "name": "Existed_Raid", 00:24:41.058 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:41.058 "strip_size_kb": 64, 00:24:41.058 "state": "configuring", 00:24:41.058 "raid_level": "raid0", 00:24:41.058 "superblock": true, 00:24:41.058 "num_base_bdevs": 4, 00:24:41.058 "num_base_bdevs_discovered": 2, 00:24:41.058 "num_base_bdevs_operational": 4, 00:24:41.058 "base_bdevs_list": [ 00:24:41.058 { 00:24:41.058 "name": "BaseBdev1", 00:24:41.058 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:41.058 "is_configured": true, 00:24:41.058 "data_offset": 2048, 00:24:41.058 "data_size": 63488 00:24:41.058 }, 00:24:41.058 { 00:24:41.058 "name": null, 00:24:41.058 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:41.058 "is_configured": false, 00:24:41.058 "data_offset": 2048, 00:24:41.058 "data_size": 63488 00:24:41.058 }, 00:24:41.058 { 00:24:41.058 "name": null, 00:24:41.059 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:41.059 "is_configured": false, 00:24:41.059 "data_offset": 2048, 00:24:41.059 "data_size": 63488 00:24:41.059 }, 00:24:41.059 { 00:24:41.059 "name": "BaseBdev4", 00:24:41.059 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:41.059 "is_configured": true, 00:24:41.059 "data_offset": 2048, 00:24:41.059 "data_size": 63488 00:24:41.059 } 00:24:41.059 ] 00:24:41.059 }' 00:24:41.059 14:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:41.059 14:07:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.991 14:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.991 14:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:42.249 [2024-07-25 14:07:31.252354] bdev_raid.c:3386:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.249 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.813 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.813 "name": "Existed_Raid", 00:24:42.813 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:42.813 "strip_size_kb": 64, 00:24:42.813 "state": "configuring", 00:24:42.813 "raid_level": "raid0", 00:24:42.813 "superblock": true, 00:24:42.813 "num_base_bdevs": 4, 00:24:42.813 "num_base_bdevs_discovered": 3, 00:24:42.813 "num_base_bdevs_operational": 4, 00:24:42.813 "base_bdevs_list": [ 00:24:42.813 { 00:24:42.813 "name": "BaseBdev1", 00:24:42.813 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:42.813 "is_configured": true, 00:24:42.813 "data_offset": 2048, 00:24:42.813 "data_size": 63488 00:24:42.813 }, 00:24:42.813 { 00:24:42.813 "name": null, 00:24:42.813 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:42.813 "is_configured": false, 00:24:42.813 "data_offset": 2048, 00:24:42.813 "data_size": 63488 00:24:42.813 }, 00:24:42.813 { 00:24:42.813 "name": "BaseBdev3", 00:24:42.813 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:42.813 "is_configured": true, 00:24:42.813 "data_offset": 2048, 00:24:42.813 "data_size": 63488 00:24:42.813 }, 00:24:42.813 { 00:24:42.813 "name": "BaseBdev4", 00:24:42.813 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:42.813 "is_configured": true, 00:24:42.813 "data_offset": 2048, 00:24:42.813 "data_size": 63488 00:24:42.813 } 00:24:42.813 ] 00:24:42.813 }' 00:24:42.813 14:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.813 14:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.379 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.379 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:43.640 14:07:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:43.640 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:43.903 [2024-07-25 14:07:32.776669] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.903 14:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.161 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:44.161 "name": "Existed_Raid", 00:24:44.161 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:44.161 "strip_size_kb": 64, 00:24:44.161 "state": "configuring", 00:24:44.161 "raid_level": "raid0", 00:24:44.161 "superblock": true, 00:24:44.161 "num_base_bdevs": 4, 00:24:44.161 "num_base_bdevs_discovered": 2, 00:24:44.161 "num_base_bdevs_operational": 4, 00:24:44.161 "base_bdevs_list": [ 00:24:44.161 { 00:24:44.161 "name": null, 00:24:44.161 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:44.161 "is_configured": false, 00:24:44.161 "data_offset": 2048, 00:24:44.161 "data_size": 63488 00:24:44.161 }, 00:24:44.161 { 00:24:44.161 "name": null, 00:24:44.161 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:44.161 "is_configured": false, 00:24:44.161 "data_offset": 2048, 00:24:44.161 "data_size": 63488 00:24:44.161 }, 00:24:44.161 { 00:24:44.161 "name": "BaseBdev3", 00:24:44.161 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:44.161 "is_configured": true, 00:24:44.161 "data_offset": 2048, 00:24:44.161 "data_size": 63488 00:24:44.161 }, 00:24:44.161 { 00:24:44.161 "name": "BaseBdev4", 00:24:44.161 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:44.161 "is_configured": true, 00:24:44.161 "data_offset": 2048, 00:24:44.161 "data_size": 63488 00:24:44.161 } 00:24:44.161 ] 00:24:44.161 }' 00:24:44.161 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:44.161 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.094 
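Just before this point the trace re-attaches BaseBdev3 with bdev_raid_add_base_bdev and then deletes the malloc bdev behind slot 0, which drops the discovered count back to 2 while the array stays in "configuring". The snippet below is a hedged condensation of those two calls plus the state query the test repeats after each step, taken from the commands visible in the log rather than from the script source.

```bash
# Condensed from the rpc.py calls traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Re-attach a previously removed base bdev into its slot of Existed_Raid.
"$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev3

# Deleting the malloc bdev behind a configured slot detaches it from the
# array: that slot's name becomes null and is_configured flips to false,
# while the raid bdev itself remains "configuring".
"$rpc" -s "$sock" bdev_malloc_delete BaseBdev1

"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # still "configuring"
```

Later in the trace the deleted slot is refilled by creating a new malloc bdev (NewBaseBdev) with the original UUID, at which point all four slots are configured and the array transitions to "online".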
14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.094 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:45.094 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:45.094 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:45.352 [2024-07-25 14:07:34.326383] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.352 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.610 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.610 "name": "Existed_Raid", 00:24:45.610 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:45.610 "strip_size_kb": 64, 00:24:45.610 "state": "configuring", 00:24:45.610 "raid_level": "raid0", 00:24:45.610 "superblock": true, 00:24:45.610 "num_base_bdevs": 4, 00:24:45.610 "num_base_bdevs_discovered": 3, 00:24:45.610 "num_base_bdevs_operational": 4, 00:24:45.610 "base_bdevs_list": [ 00:24:45.610 { 00:24:45.610 "name": null, 00:24:45.610 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:45.610 "is_configured": false, 00:24:45.610 "data_offset": 2048, 00:24:45.610 "data_size": 63488 00:24:45.610 }, 00:24:45.610 { 00:24:45.610 "name": "BaseBdev2", 00:24:45.610 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:45.610 "is_configured": true, 00:24:45.610 "data_offset": 2048, 00:24:45.610 "data_size": 63488 00:24:45.610 }, 00:24:45.610 { 00:24:45.610 "name": "BaseBdev3", 00:24:45.610 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:45.610 "is_configured": true, 00:24:45.610 "data_offset": 2048, 00:24:45.610 "data_size": 63488 00:24:45.610 }, 00:24:45.610 { 00:24:45.610 "name": "BaseBdev4", 00:24:45.610 "uuid": 
"b2cafa36-090b-440f-8026-7624c738d978", 00:24:45.610 "is_configured": true, 00:24:45.610 "data_offset": 2048, 00:24:45.610 "data_size": 63488 00:24:45.610 } 00:24:45.610 ] 00:24:45.610 }' 00:24:45.610 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.610 14:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.176 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.176 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:46.433 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:46.433 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.433 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:46.691 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1096161d-f3fa-47a5-9096-fe5b940b7e4e 00:24:46.947 [2024-07-25 14:07:35.950051] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:46.947 [2024-07-25 14:07:35.950568] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:24:46.947 [2024-07-25 14:07:35.950711] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:46.947 [2024-07-25 14:07:35.950881] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:46.947 [2024-07-25 14:07:35.951294] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:24:46.947 [2024-07-25 14:07:35.951435] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:24:46.947 NewBaseBdev 00:24:46.947 [2024-07-25 14:07:35.951693] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.947 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:46.947 14:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:24:46.947 14:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:46.947 14:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:46.947 14:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:46.947 14:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:46.947 14:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:47.204 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:47.462 [ 00:24:47.462 { 00:24:47.462 "name": "NewBaseBdev", 00:24:47.462 "aliases": [ 00:24:47.462 "1096161d-f3fa-47a5-9096-fe5b940b7e4e" 
00:24:47.462 ], 00:24:47.462 "product_name": "Malloc disk", 00:24:47.462 "block_size": 512, 00:24:47.462 "num_blocks": 65536, 00:24:47.462 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:47.462 "assigned_rate_limits": { 00:24:47.462 "rw_ios_per_sec": 0, 00:24:47.462 "rw_mbytes_per_sec": 0, 00:24:47.462 "r_mbytes_per_sec": 0, 00:24:47.462 "w_mbytes_per_sec": 0 00:24:47.462 }, 00:24:47.462 "claimed": true, 00:24:47.462 "claim_type": "exclusive_write", 00:24:47.462 "zoned": false, 00:24:47.462 "supported_io_types": { 00:24:47.462 "read": true, 00:24:47.462 "write": true, 00:24:47.462 "unmap": true, 00:24:47.462 "flush": true, 00:24:47.462 "reset": true, 00:24:47.462 "nvme_admin": false, 00:24:47.462 "nvme_io": false, 00:24:47.462 "nvme_io_md": false, 00:24:47.462 "write_zeroes": true, 00:24:47.462 "zcopy": true, 00:24:47.462 "get_zone_info": false, 00:24:47.462 "zone_management": false, 00:24:47.462 "zone_append": false, 00:24:47.462 "compare": false, 00:24:47.462 "compare_and_write": false, 00:24:47.462 "abort": true, 00:24:47.462 "seek_hole": false, 00:24:47.462 "seek_data": false, 00:24:47.462 "copy": true, 00:24:47.462 "nvme_iov_md": false 00:24:47.462 }, 00:24:47.462 "memory_domains": [ 00:24:47.462 { 00:24:47.462 "dma_device_id": "system", 00:24:47.462 "dma_device_type": 1 00:24:47.462 }, 00:24:47.462 { 00:24:47.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.462 "dma_device_type": 2 00:24:47.462 } 00:24:47.462 ], 00:24:47.462 "driver_specific": {} 00:24:47.462 } 00:24:47.462 ] 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.462 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.029 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.029 "name": "Existed_Raid", 00:24:48.029 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:48.029 "strip_size_kb": 64, 00:24:48.029 "state": "online", 00:24:48.029 "raid_level": "raid0", 00:24:48.029 "superblock": true, 00:24:48.029 "num_base_bdevs": 4, 00:24:48.029 "num_base_bdevs_discovered": 4, 
00:24:48.029 "num_base_bdevs_operational": 4, 00:24:48.029 "base_bdevs_list": [ 00:24:48.029 { 00:24:48.029 "name": "NewBaseBdev", 00:24:48.029 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:48.029 "is_configured": true, 00:24:48.029 "data_offset": 2048, 00:24:48.029 "data_size": 63488 00:24:48.029 }, 00:24:48.029 { 00:24:48.029 "name": "BaseBdev2", 00:24:48.029 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:48.029 "is_configured": true, 00:24:48.029 "data_offset": 2048, 00:24:48.029 "data_size": 63488 00:24:48.029 }, 00:24:48.029 { 00:24:48.029 "name": "BaseBdev3", 00:24:48.029 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:48.029 "is_configured": true, 00:24:48.029 "data_offset": 2048, 00:24:48.029 "data_size": 63488 00:24:48.029 }, 00:24:48.029 { 00:24:48.029 "name": "BaseBdev4", 00:24:48.029 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:48.029 "is_configured": true, 00:24:48.029 "data_offset": 2048, 00:24:48.029 "data_size": 63488 00:24:48.029 } 00:24:48.029 ] 00:24:48.029 }' 00:24:48.029 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.029 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:48.595 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:48.854 [2024-07-25 14:07:37.698888] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:48.854 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:48.854 "name": "Existed_Raid", 00:24:48.854 "aliases": [ 00:24:48.854 "82a109e1-eee7-406a-9bcc-3c72b21915ba" 00:24:48.854 ], 00:24:48.854 "product_name": "Raid Volume", 00:24:48.854 "block_size": 512, 00:24:48.854 "num_blocks": 253952, 00:24:48.854 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:48.854 "assigned_rate_limits": { 00:24:48.854 "rw_ios_per_sec": 0, 00:24:48.854 "rw_mbytes_per_sec": 0, 00:24:48.854 "r_mbytes_per_sec": 0, 00:24:48.854 "w_mbytes_per_sec": 0 00:24:48.854 }, 00:24:48.854 "claimed": false, 00:24:48.854 "zoned": false, 00:24:48.854 "supported_io_types": { 00:24:48.854 "read": true, 00:24:48.854 "write": true, 00:24:48.854 "unmap": true, 00:24:48.854 "flush": true, 00:24:48.854 "reset": true, 00:24:48.854 "nvme_admin": false, 00:24:48.854 "nvme_io": false, 00:24:48.854 "nvme_io_md": false, 00:24:48.854 "write_zeroes": true, 00:24:48.854 "zcopy": false, 00:24:48.854 "get_zone_info": false, 00:24:48.854 "zone_management": false, 00:24:48.854 "zone_append": false, 00:24:48.854 "compare": false, 00:24:48.854 "compare_and_write": false, 00:24:48.854 "abort": false, 
00:24:48.854 "seek_hole": false, 00:24:48.854 "seek_data": false, 00:24:48.854 "copy": false, 00:24:48.854 "nvme_iov_md": false 00:24:48.854 }, 00:24:48.854 "memory_domains": [ 00:24:48.854 { 00:24:48.854 "dma_device_id": "system", 00:24:48.854 "dma_device_type": 1 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.854 "dma_device_type": 2 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "dma_device_id": "system", 00:24:48.854 "dma_device_type": 1 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.854 "dma_device_type": 2 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "dma_device_id": "system", 00:24:48.854 "dma_device_type": 1 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.854 "dma_device_type": 2 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "dma_device_id": "system", 00:24:48.854 "dma_device_type": 1 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.854 "dma_device_type": 2 00:24:48.854 } 00:24:48.854 ], 00:24:48.854 "driver_specific": { 00:24:48.854 "raid": { 00:24:48.854 "uuid": "82a109e1-eee7-406a-9bcc-3c72b21915ba", 00:24:48.854 "strip_size_kb": 64, 00:24:48.854 "state": "online", 00:24:48.854 "raid_level": "raid0", 00:24:48.854 "superblock": true, 00:24:48.854 "num_base_bdevs": 4, 00:24:48.854 "num_base_bdevs_discovered": 4, 00:24:48.854 "num_base_bdevs_operational": 4, 00:24:48.854 "base_bdevs_list": [ 00:24:48.854 { 00:24:48.854 "name": "NewBaseBdev", 00:24:48.854 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:48.854 "is_configured": true, 00:24:48.854 "data_offset": 2048, 00:24:48.854 "data_size": 63488 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "name": "BaseBdev2", 00:24:48.854 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:48.854 "is_configured": true, 00:24:48.854 "data_offset": 2048, 00:24:48.854 "data_size": 63488 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "name": "BaseBdev3", 00:24:48.854 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:48.854 "is_configured": true, 00:24:48.854 "data_offset": 2048, 00:24:48.854 "data_size": 63488 00:24:48.854 }, 00:24:48.854 { 00:24:48.854 "name": "BaseBdev4", 00:24:48.854 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:48.854 "is_configured": true, 00:24:48.854 "data_offset": 2048, 00:24:48.854 "data_size": 63488 00:24:48.854 } 00:24:48.854 ] 00:24:48.854 } 00:24:48.854 } 00:24:48.854 }' 00:24:48.854 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:48.854 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:48.854 BaseBdev2 00:24:48.854 BaseBdev3 00:24:48.854 BaseBdev4' 00:24:48.854 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:48.854 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:48.854 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.113 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.113 "name": "NewBaseBdev", 00:24:49.113 "aliases": [ 00:24:49.113 "1096161d-f3fa-47a5-9096-fe5b940b7e4e" 00:24:49.113 ], 00:24:49.113 "product_name": "Malloc disk", 00:24:49.113 
"block_size": 512, 00:24:49.113 "num_blocks": 65536, 00:24:49.113 "uuid": "1096161d-f3fa-47a5-9096-fe5b940b7e4e", 00:24:49.113 "assigned_rate_limits": { 00:24:49.113 "rw_ios_per_sec": 0, 00:24:49.113 "rw_mbytes_per_sec": 0, 00:24:49.113 "r_mbytes_per_sec": 0, 00:24:49.113 "w_mbytes_per_sec": 0 00:24:49.113 }, 00:24:49.113 "claimed": true, 00:24:49.113 "claim_type": "exclusive_write", 00:24:49.113 "zoned": false, 00:24:49.113 "supported_io_types": { 00:24:49.113 "read": true, 00:24:49.113 "write": true, 00:24:49.113 "unmap": true, 00:24:49.113 "flush": true, 00:24:49.113 "reset": true, 00:24:49.113 "nvme_admin": false, 00:24:49.113 "nvme_io": false, 00:24:49.113 "nvme_io_md": false, 00:24:49.113 "write_zeroes": true, 00:24:49.113 "zcopy": true, 00:24:49.113 "get_zone_info": false, 00:24:49.113 "zone_management": false, 00:24:49.113 "zone_append": false, 00:24:49.113 "compare": false, 00:24:49.113 "compare_and_write": false, 00:24:49.113 "abort": true, 00:24:49.113 "seek_hole": false, 00:24:49.113 "seek_data": false, 00:24:49.113 "copy": true, 00:24:49.113 "nvme_iov_md": false 00:24:49.113 }, 00:24:49.113 "memory_domains": [ 00:24:49.113 { 00:24:49.113 "dma_device_id": "system", 00:24:49.113 "dma_device_type": 1 00:24:49.113 }, 00:24:49.113 { 00:24:49.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.113 "dma_device_type": 2 00:24:49.113 } 00:24:49.113 ], 00:24:49.113 "driver_specific": {} 00:24:49.113 }' 00:24:49.113 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.113 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.113 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:49.113 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.372 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.372 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:49.372 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.372 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.372 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:49.372 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.372 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.630 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:49.630 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:49.631 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:49.631 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.889 "name": "BaseBdev2", 00:24:49.889 "aliases": [ 00:24:49.889 "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1" 00:24:49.889 ], 00:24:49.889 "product_name": "Malloc disk", 00:24:49.889 "block_size": 512, 00:24:49.889 "num_blocks": 65536, 00:24:49.889 "uuid": "a0ee9d8e-a60b-4d03-a61d-223bd6f4e8c1", 00:24:49.889 "assigned_rate_limits": { 
00:24:49.889 "rw_ios_per_sec": 0, 00:24:49.889 "rw_mbytes_per_sec": 0, 00:24:49.889 "r_mbytes_per_sec": 0, 00:24:49.889 "w_mbytes_per_sec": 0 00:24:49.889 }, 00:24:49.889 "claimed": true, 00:24:49.889 "claim_type": "exclusive_write", 00:24:49.889 "zoned": false, 00:24:49.889 "supported_io_types": { 00:24:49.889 "read": true, 00:24:49.889 "write": true, 00:24:49.889 "unmap": true, 00:24:49.889 "flush": true, 00:24:49.889 "reset": true, 00:24:49.889 "nvme_admin": false, 00:24:49.889 "nvme_io": false, 00:24:49.889 "nvme_io_md": false, 00:24:49.889 "write_zeroes": true, 00:24:49.889 "zcopy": true, 00:24:49.889 "get_zone_info": false, 00:24:49.889 "zone_management": false, 00:24:49.889 "zone_append": false, 00:24:49.889 "compare": false, 00:24:49.889 "compare_and_write": false, 00:24:49.889 "abort": true, 00:24:49.889 "seek_hole": false, 00:24:49.889 "seek_data": false, 00:24:49.889 "copy": true, 00:24:49.889 "nvme_iov_md": false 00:24:49.889 }, 00:24:49.889 "memory_domains": [ 00:24:49.889 { 00:24:49.889 "dma_device_id": "system", 00:24:49.889 "dma_device_type": 1 00:24:49.889 }, 00:24:49.889 { 00:24:49.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.889 "dma_device_type": 2 00:24:49.889 } 00:24:49.889 ], 00:24:49.889 "driver_specific": {} 00:24:49.889 }' 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:49.889 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.147 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.147 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:50.148 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.148 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.148 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:50.148 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:50.148 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:50.148 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:50.406 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:50.406 "name": "BaseBdev3", 00:24:50.406 "aliases": [ 00:24:50.406 "a3115b31-603b-4c15-8276-b463996f0cf5" 00:24:50.406 ], 00:24:50.406 "product_name": "Malloc disk", 00:24:50.406 "block_size": 512, 00:24:50.406 "num_blocks": 65536, 00:24:50.406 "uuid": "a3115b31-603b-4c15-8276-b463996f0cf5", 00:24:50.406 "assigned_rate_limits": { 00:24:50.406 "rw_ios_per_sec": 0, 00:24:50.406 "rw_mbytes_per_sec": 0, 00:24:50.406 "r_mbytes_per_sec": 0, 00:24:50.406 "w_mbytes_per_sec": 0 
00:24:50.406 }, 00:24:50.406 "claimed": true, 00:24:50.406 "claim_type": "exclusive_write", 00:24:50.406 "zoned": false, 00:24:50.406 "supported_io_types": { 00:24:50.406 "read": true, 00:24:50.406 "write": true, 00:24:50.406 "unmap": true, 00:24:50.406 "flush": true, 00:24:50.406 "reset": true, 00:24:50.406 "nvme_admin": false, 00:24:50.406 "nvme_io": false, 00:24:50.406 "nvme_io_md": false, 00:24:50.406 "write_zeroes": true, 00:24:50.406 "zcopy": true, 00:24:50.406 "get_zone_info": false, 00:24:50.406 "zone_management": false, 00:24:50.406 "zone_append": false, 00:24:50.406 "compare": false, 00:24:50.406 "compare_and_write": false, 00:24:50.406 "abort": true, 00:24:50.406 "seek_hole": false, 00:24:50.406 "seek_data": false, 00:24:50.406 "copy": true, 00:24:50.406 "nvme_iov_md": false 00:24:50.406 }, 00:24:50.406 "memory_domains": [ 00:24:50.406 { 00:24:50.406 "dma_device_id": "system", 00:24:50.406 "dma_device_type": 1 00:24:50.406 }, 00:24:50.406 { 00:24:50.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.406 "dma_device_type": 2 00:24:50.406 } 00:24:50.406 ], 00:24:50.406 "driver_specific": {} 00:24:50.406 }' 00:24:50.406 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.406 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:50.664 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.921 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.921 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:50.922 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:50.922 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:50.922 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:51.179 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:51.179 "name": "BaseBdev4", 00:24:51.179 "aliases": [ 00:24:51.179 "b2cafa36-090b-440f-8026-7624c738d978" 00:24:51.179 ], 00:24:51.179 "product_name": "Malloc disk", 00:24:51.179 "block_size": 512, 00:24:51.179 "num_blocks": 65536, 00:24:51.179 "uuid": "b2cafa36-090b-440f-8026-7624c738d978", 00:24:51.179 "assigned_rate_limits": { 00:24:51.179 "rw_ios_per_sec": 0, 00:24:51.179 "rw_mbytes_per_sec": 0, 00:24:51.179 "r_mbytes_per_sec": 0, 00:24:51.179 "w_mbytes_per_sec": 0 00:24:51.179 }, 00:24:51.179 "claimed": true, 00:24:51.179 "claim_type": "exclusive_write", 00:24:51.179 "zoned": false, 00:24:51.179 
"supported_io_types": { 00:24:51.179 "read": true, 00:24:51.179 "write": true, 00:24:51.179 "unmap": true, 00:24:51.179 "flush": true, 00:24:51.179 "reset": true, 00:24:51.179 "nvme_admin": false, 00:24:51.179 "nvme_io": false, 00:24:51.179 "nvme_io_md": false, 00:24:51.179 "write_zeroes": true, 00:24:51.179 "zcopy": true, 00:24:51.179 "get_zone_info": false, 00:24:51.179 "zone_management": false, 00:24:51.179 "zone_append": false, 00:24:51.179 "compare": false, 00:24:51.179 "compare_and_write": false, 00:24:51.179 "abort": true, 00:24:51.179 "seek_hole": false, 00:24:51.179 "seek_data": false, 00:24:51.179 "copy": true, 00:24:51.179 "nvme_iov_md": false 00:24:51.179 }, 00:24:51.179 "memory_domains": [ 00:24:51.179 { 00:24:51.179 "dma_device_id": "system", 00:24:51.179 "dma_device_type": 1 00:24:51.179 }, 00:24:51.179 { 00:24:51.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.179 "dma_device_type": 2 00:24:51.179 } 00:24:51.179 ], 00:24:51.179 "driver_specific": {} 00:24:51.179 }' 00:24:51.179 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:51.179 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:51.179 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:51.179 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:51.179 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:51.179 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:51.179 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:51.437 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:51.437 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:51.437 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:51.437 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:51.437 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:51.437 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:51.747 [2024-07-25 14:07:40.651196] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:51.747 [2024-07-25 14:07:40.651437] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:51.747 [2024-07-25 14:07:40.651651] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:51.747 [2024-07-25 14:07:40.651852] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:51.747 [2024-07-25 14:07:40.651998] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 135296 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 135296 ']' 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 135296 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # uname 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 135296 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:51.747 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 135296' 00:24:51.748 killing process with pid 135296 00:24:51.748 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 135296 00:24:51.748 [2024-07-25 14:07:40.692619] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:51.748 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 135296 00:24:52.005 [2024-07-25 14:07:41.012099] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:53.378 ************************************ 00:24:53.378 END TEST raid_state_function_test_sb 00:24:53.378 ************************************ 00:24:53.378 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:53.378 00:24:53.378 real 0m37.927s 00:24:53.378 user 1m10.872s 00:24:53.378 sys 0m4.110s 00:24:53.378 14:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:53.378 14:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.378 14:07:42 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:24:53.378 14:07:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:53.378 14:07:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:53.378 14:07:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:53.378 ************************************ 00:24:53.378 START TEST raid_superblock_test 00:24:53.378 ************************************ 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:24:53.378 14:07:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:24:53.378 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=136435 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 136435 /var/tmp/spdk-raid.sock 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 136435 ']' 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:53.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.379 14:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.379 [2024-07-25 14:07:42.246290] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
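For orientation at this point in the trace: once bdev_svc is listening on /var/tmp/spdk-raid.sock, the raid_superblock_test case below drives it entirely through scripts/rpc.py. The following is a condensed, hand-written sketch of that call sequence, using only commands that appear verbatim later in this log (the rpc wrapper function is shorthand introduced here, not part of the test script; the jq-based property checks and the negative tests are omitted):

    #!/usr/bin/env bash
    # Shorthand wrapper around the RPC client invocation used throughout the trace.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Four 32 MiB malloc bdevs (512-byte blocks, 65536 blocks each), each wrapped
    # in a passthru bdev with a fixed UUID -- these become pt1..pt4.
    for i in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b "malloc$i"
        rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # Assemble a RAID0 volume with a 64 KiB strip size and an on-disk superblock (-s).
    rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # Inspect the array and its base bdevs, then tear everything down again.
    rpc bdev_raid_get_bdevs all
    rpc bdev_get_bdevs -b raid_bdev1
    rpc bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do rpc bdev_passthru_delete "pt$i"; done

The later part of the trace exercises the superblock behaviour on top of this sequence: after the passthru bdevs are deleted, bdev_raid_create against the raw malloc bdevs is expected to fail with JSON-RPC error -17 ("File exists"), because each malloc bdev still carries the superblock of raid_bdev1, and re-creating the passthru bdevs lets the examine path start reassembling raid_bdev1 from those superblocks (the array reappears in the configuring state).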
00:24:53.379 [2024-07-25 14:07:42.247239] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136435 ] 00:24:53.379 [2024-07-25 14:07:42.410402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.636 [2024-07-25 14:07:42.651667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.894 [2024-07-25 14:07:42.849286] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:54.152 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:54.410 malloc1 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:54.667 [2024-07-25 14:07:43.685024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:54.667 [2024-07-25 14:07:43.685444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.667 [2024-07-25 14:07:43.685612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:54.667 [2024-07-25 14:07:43.685742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.667 [2024-07-25 14:07:43.688395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.667 [2024-07-25 14:07:43.688575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:54.667 pt1 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:54.667 14:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:55.233 malloc2 00:24:55.233 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:55.491 [2024-07-25 14:07:44.280063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:55.491 [2024-07-25 14:07:44.280571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.491 [2024-07-25 14:07:44.280746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:55.491 [2024-07-25 14:07:44.280907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.491 [2024-07-25 14:07:44.283738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.491 [2024-07-25 14:07:44.283946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:55.491 pt2 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:55.491 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:55.749 malloc3 00:24:55.749 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:56.006 [2024-07-25 14:07:44.811453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:56.007 [2024-07-25 14:07:44.811765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.007 [2024-07-25 14:07:44.811934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:56.007 [2024-07-25 14:07:44.812080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.007 [2024-07-25 14:07:44.814652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.007 [2024-07-25 14:07:44.814840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:56.007 pt3 00:24:56.007 
14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:56.007 14:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:56.265 malloc4 00:24:56.265 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:56.528 [2024-07-25 14:07:45.351364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:56.528 [2024-07-25 14:07:45.351664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.528 [2024-07-25 14:07:45.351826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:56.528 [2024-07-25 14:07:45.351982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.528 [2024-07-25 14:07:45.354675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.528 [2024-07-25 14:07:45.354859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:56.528 pt4 00:24:56.528 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:56.528 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:56.528 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:56.815 [2024-07-25 14:07:45.615630] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:56.815 [2024-07-25 14:07:45.617989] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:56.815 [2024-07-25 14:07:45.618202] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:56.815 [2024-07-25 14:07:45.618439] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:56.815 [2024-07-25 14:07:45.618788] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:24:56.815 [2024-07-25 14:07:45.618924] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:56.815 [2024-07-25 14:07:45.619141] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:56.815 [2024-07-25 14:07:45.619686] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:24:56.815 [2024-07-25 14:07:45.619819] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:24:56.815 [2024-07-25 14:07:45.620187] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.815 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.074 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.074 "name": "raid_bdev1", 00:24:57.074 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:24:57.074 "strip_size_kb": 64, 00:24:57.074 "state": "online", 00:24:57.074 "raid_level": "raid0", 00:24:57.074 "superblock": true, 00:24:57.074 "num_base_bdevs": 4, 00:24:57.074 "num_base_bdevs_discovered": 4, 00:24:57.074 "num_base_bdevs_operational": 4, 00:24:57.074 "base_bdevs_list": [ 00:24:57.074 { 00:24:57.074 "name": "pt1", 00:24:57.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:57.074 "is_configured": true, 00:24:57.074 "data_offset": 2048, 00:24:57.074 "data_size": 63488 00:24:57.074 }, 00:24:57.074 { 00:24:57.074 "name": "pt2", 00:24:57.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:57.074 "is_configured": true, 00:24:57.074 "data_offset": 2048, 00:24:57.074 "data_size": 63488 00:24:57.074 }, 00:24:57.074 { 00:24:57.074 "name": "pt3", 00:24:57.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:57.074 "is_configured": true, 00:24:57.074 "data_offset": 2048, 00:24:57.074 "data_size": 63488 00:24:57.074 }, 00:24:57.074 { 00:24:57.074 "name": "pt4", 00:24:57.074 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:57.074 "is_configured": true, 00:24:57.074 "data_offset": 2048, 00:24:57.074 "data_size": 63488 00:24:57.074 } 00:24:57.074 ] 00:24:57.074 }' 00:24:57.074 14:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.074 14:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:57.640 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:57.898 [2024-07-25 14:07:46.828716] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:57.898 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:57.898 "name": "raid_bdev1", 00:24:57.898 "aliases": [ 00:24:57.898 "68d17a36-0d16-494f-b2ee-53faf3980135" 00:24:57.898 ], 00:24:57.898 "product_name": "Raid Volume", 00:24:57.898 "block_size": 512, 00:24:57.898 "num_blocks": 253952, 00:24:57.898 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:24:57.898 "assigned_rate_limits": { 00:24:57.898 "rw_ios_per_sec": 0, 00:24:57.898 "rw_mbytes_per_sec": 0, 00:24:57.898 "r_mbytes_per_sec": 0, 00:24:57.898 "w_mbytes_per_sec": 0 00:24:57.898 }, 00:24:57.898 "claimed": false, 00:24:57.898 "zoned": false, 00:24:57.898 "supported_io_types": { 00:24:57.898 "read": true, 00:24:57.898 "write": true, 00:24:57.898 "unmap": true, 00:24:57.898 "flush": true, 00:24:57.898 "reset": true, 00:24:57.898 "nvme_admin": false, 00:24:57.898 "nvme_io": false, 00:24:57.898 "nvme_io_md": false, 00:24:57.898 "write_zeroes": true, 00:24:57.898 "zcopy": false, 00:24:57.898 "get_zone_info": false, 00:24:57.898 "zone_management": false, 00:24:57.898 "zone_append": false, 00:24:57.898 "compare": false, 00:24:57.898 "compare_and_write": false, 00:24:57.898 "abort": false, 00:24:57.898 "seek_hole": false, 00:24:57.898 "seek_data": false, 00:24:57.898 "copy": false, 00:24:57.898 "nvme_iov_md": false 00:24:57.898 }, 00:24:57.898 "memory_domains": [ 00:24:57.898 { 00:24:57.898 "dma_device_id": "system", 00:24:57.898 "dma_device_type": 1 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.898 "dma_device_type": 2 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "dma_device_id": "system", 00:24:57.898 "dma_device_type": 1 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.898 "dma_device_type": 2 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "dma_device_id": "system", 00:24:57.898 "dma_device_type": 1 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.898 "dma_device_type": 2 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "dma_device_id": "system", 00:24:57.898 "dma_device_type": 1 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.898 "dma_device_type": 2 00:24:57.898 } 00:24:57.898 ], 00:24:57.898 "driver_specific": { 00:24:57.898 "raid": { 00:24:57.898 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:24:57.898 "strip_size_kb": 64, 00:24:57.898 "state": "online", 00:24:57.898 "raid_level": "raid0", 00:24:57.898 "superblock": true, 00:24:57.898 "num_base_bdevs": 4, 00:24:57.898 "num_base_bdevs_discovered": 4, 00:24:57.898 "num_base_bdevs_operational": 4, 00:24:57.898 "base_bdevs_list": [ 00:24:57.898 { 00:24:57.898 "name": "pt1", 00:24:57.898 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:57.898 "is_configured": true, 00:24:57.898 "data_offset": 2048, 00:24:57.898 "data_size": 63488 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "name": "pt2", 00:24:57.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:57.898 "is_configured": true, 00:24:57.898 "data_offset": 2048, 00:24:57.898 "data_size": 63488 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "name": "pt3", 00:24:57.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:57.898 "is_configured": true, 00:24:57.898 "data_offset": 2048, 00:24:57.898 "data_size": 63488 00:24:57.898 }, 00:24:57.898 { 00:24:57.898 "name": "pt4", 00:24:57.898 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:57.898 "is_configured": true, 00:24:57.898 "data_offset": 2048, 00:24:57.898 "data_size": 63488 00:24:57.898 } 00:24:57.898 ] 00:24:57.898 } 00:24:57.898 } 00:24:57.898 }' 00:24:57.898 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:57.898 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:57.898 pt2 00:24:57.898 pt3 00:24:57.898 pt4' 00:24:57.898 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:57.898 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:57.898 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:58.156 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:58.156 "name": "pt1", 00:24:58.156 "aliases": [ 00:24:58.156 "00000000-0000-0000-0000-000000000001" 00:24:58.156 ], 00:24:58.156 "product_name": "passthru", 00:24:58.156 "block_size": 512, 00:24:58.156 "num_blocks": 65536, 00:24:58.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:58.156 "assigned_rate_limits": { 00:24:58.156 "rw_ios_per_sec": 0, 00:24:58.156 "rw_mbytes_per_sec": 0, 00:24:58.156 "r_mbytes_per_sec": 0, 00:24:58.156 "w_mbytes_per_sec": 0 00:24:58.156 }, 00:24:58.156 "claimed": true, 00:24:58.156 "claim_type": "exclusive_write", 00:24:58.156 "zoned": false, 00:24:58.156 "supported_io_types": { 00:24:58.156 "read": true, 00:24:58.156 "write": true, 00:24:58.156 "unmap": true, 00:24:58.156 "flush": true, 00:24:58.156 "reset": true, 00:24:58.156 "nvme_admin": false, 00:24:58.156 "nvme_io": false, 00:24:58.156 "nvme_io_md": false, 00:24:58.156 "write_zeroes": true, 00:24:58.156 "zcopy": true, 00:24:58.156 "get_zone_info": false, 00:24:58.156 "zone_management": false, 00:24:58.156 "zone_append": false, 00:24:58.156 "compare": false, 00:24:58.156 "compare_and_write": false, 00:24:58.156 "abort": true, 00:24:58.156 "seek_hole": false, 00:24:58.156 "seek_data": false, 00:24:58.156 "copy": true, 00:24:58.156 "nvme_iov_md": false 00:24:58.156 }, 00:24:58.156 "memory_domains": [ 00:24:58.156 { 00:24:58.156 "dma_device_id": "system", 00:24:58.156 "dma_device_type": 1 00:24:58.156 }, 00:24:58.156 { 00:24:58.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.156 "dma_device_type": 2 00:24:58.156 } 00:24:58.156 ], 00:24:58.156 "driver_specific": { 00:24:58.156 "passthru": { 00:24:58.156 "name": "pt1", 00:24:58.156 "base_bdev_name": "malloc1" 00:24:58.156 } 00:24:58.156 } 00:24:58.156 }' 00:24:58.156 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.156 14:07:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.414 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.672 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.672 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:58.672 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:58.672 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:58.672 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:58.930 "name": "pt2", 00:24:58.930 "aliases": [ 00:24:58.930 "00000000-0000-0000-0000-000000000002" 00:24:58.930 ], 00:24:58.930 "product_name": "passthru", 00:24:58.930 "block_size": 512, 00:24:58.930 "num_blocks": 65536, 00:24:58.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:58.930 "assigned_rate_limits": { 00:24:58.930 "rw_ios_per_sec": 0, 00:24:58.930 "rw_mbytes_per_sec": 0, 00:24:58.930 "r_mbytes_per_sec": 0, 00:24:58.930 "w_mbytes_per_sec": 0 00:24:58.930 }, 00:24:58.930 "claimed": true, 00:24:58.930 "claim_type": "exclusive_write", 00:24:58.930 "zoned": false, 00:24:58.930 "supported_io_types": { 00:24:58.930 "read": true, 00:24:58.930 "write": true, 00:24:58.930 "unmap": true, 00:24:58.930 "flush": true, 00:24:58.930 "reset": true, 00:24:58.930 "nvme_admin": false, 00:24:58.930 "nvme_io": false, 00:24:58.930 "nvme_io_md": false, 00:24:58.930 "write_zeroes": true, 00:24:58.930 "zcopy": true, 00:24:58.930 "get_zone_info": false, 00:24:58.930 "zone_management": false, 00:24:58.930 "zone_append": false, 00:24:58.930 "compare": false, 00:24:58.930 "compare_and_write": false, 00:24:58.930 "abort": true, 00:24:58.930 "seek_hole": false, 00:24:58.930 "seek_data": false, 00:24:58.930 "copy": true, 00:24:58.930 "nvme_iov_md": false 00:24:58.930 }, 00:24:58.930 "memory_domains": [ 00:24:58.930 { 00:24:58.930 "dma_device_id": "system", 00:24:58.930 "dma_device_type": 1 00:24:58.930 }, 00:24:58.930 { 00:24:58.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.930 "dma_device_type": 2 00:24:58.930 } 00:24:58.930 ], 00:24:58.930 "driver_specific": { 00:24:58.930 "passthru": { 00:24:58.930 "name": "pt2", 00:24:58.930 "base_bdev_name": "malloc2" 00:24:58.930 } 00:24:58.930 } 00:24:58.930 }' 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:58.930 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.188 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.188 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:59.188 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.188 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.188 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:59.188 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:59.188 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:59.188 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:59.445 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:59.445 "name": "pt3", 00:24:59.445 "aliases": [ 00:24:59.445 "00000000-0000-0000-0000-000000000003" 00:24:59.445 ], 00:24:59.445 "product_name": "passthru", 00:24:59.445 "block_size": 512, 00:24:59.445 "num_blocks": 65536, 00:24:59.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:59.445 "assigned_rate_limits": { 00:24:59.446 "rw_ios_per_sec": 0, 00:24:59.446 "rw_mbytes_per_sec": 0, 00:24:59.446 "r_mbytes_per_sec": 0, 00:24:59.446 "w_mbytes_per_sec": 0 00:24:59.446 }, 00:24:59.446 "claimed": true, 00:24:59.446 "claim_type": "exclusive_write", 00:24:59.446 "zoned": false, 00:24:59.446 "supported_io_types": { 00:24:59.446 "read": true, 00:24:59.446 "write": true, 00:24:59.446 "unmap": true, 00:24:59.446 "flush": true, 00:24:59.446 "reset": true, 00:24:59.446 "nvme_admin": false, 00:24:59.446 "nvme_io": false, 00:24:59.446 "nvme_io_md": false, 00:24:59.446 "write_zeroes": true, 00:24:59.446 "zcopy": true, 00:24:59.446 "get_zone_info": false, 00:24:59.446 "zone_management": false, 00:24:59.446 "zone_append": false, 00:24:59.446 "compare": false, 00:24:59.446 "compare_and_write": false, 00:24:59.446 "abort": true, 00:24:59.446 "seek_hole": false, 00:24:59.446 "seek_data": false, 00:24:59.446 "copy": true, 00:24:59.446 "nvme_iov_md": false 00:24:59.446 }, 00:24:59.446 "memory_domains": [ 00:24:59.446 { 00:24:59.446 "dma_device_id": "system", 00:24:59.446 "dma_device_type": 1 00:24:59.446 }, 00:24:59.446 { 00:24:59.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.446 "dma_device_type": 2 00:24:59.446 } 00:24:59.446 ], 00:24:59.446 "driver_specific": { 00:24:59.446 "passthru": { 00:24:59.446 "name": "pt3", 00:24:59.446 "base_bdev_name": "malloc3" 00:24:59.446 } 00:24:59.446 } 00:24:59.446 }' 00:24:59.446 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.446 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.446 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:59.446 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.446 14:07:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:59.704 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:59.962 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:59.962 "name": "pt4", 00:24:59.962 "aliases": [ 00:24:59.962 "00000000-0000-0000-0000-000000000004" 00:24:59.962 ], 00:24:59.962 "product_name": "passthru", 00:24:59.962 "block_size": 512, 00:24:59.962 "num_blocks": 65536, 00:24:59.962 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:59.962 "assigned_rate_limits": { 00:24:59.962 "rw_ios_per_sec": 0, 00:24:59.962 "rw_mbytes_per_sec": 0, 00:24:59.962 "r_mbytes_per_sec": 0, 00:24:59.962 "w_mbytes_per_sec": 0 00:24:59.962 }, 00:24:59.962 "claimed": true, 00:24:59.962 "claim_type": "exclusive_write", 00:24:59.962 "zoned": false, 00:24:59.962 "supported_io_types": { 00:24:59.962 "read": true, 00:24:59.962 "write": true, 00:24:59.962 "unmap": true, 00:24:59.962 "flush": true, 00:24:59.962 "reset": true, 00:24:59.962 "nvme_admin": false, 00:24:59.962 "nvme_io": false, 00:24:59.962 "nvme_io_md": false, 00:24:59.962 "write_zeroes": true, 00:24:59.962 "zcopy": true, 00:24:59.962 "get_zone_info": false, 00:24:59.962 "zone_management": false, 00:24:59.962 "zone_append": false, 00:24:59.962 "compare": false, 00:24:59.962 "compare_and_write": false, 00:24:59.962 "abort": true, 00:24:59.962 "seek_hole": false, 00:24:59.962 "seek_data": false, 00:24:59.962 "copy": true, 00:24:59.962 "nvme_iov_md": false 00:24:59.962 }, 00:24:59.962 "memory_domains": [ 00:24:59.962 { 00:24:59.962 "dma_device_id": "system", 00:24:59.962 "dma_device_type": 1 00:24:59.962 }, 00:24:59.962 { 00:24:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.962 "dma_device_type": 2 00:24:59.962 } 00:24:59.962 ], 00:24:59.962 "driver_specific": { 00:24:59.962 "passthru": { 00:24:59.962 "name": "pt4", 00:24:59.962 "base_bdev_name": "malloc4" 00:24:59.962 } 00:24:59.962 } 00:24:59.962 }' 00:24:59.962 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:59.962 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:00.220 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.478 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.478 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:00.478 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:00.478 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:25:00.737 [2024-07-25 14:07:49.569328] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:00.737 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=68d17a36-0d16-494f-b2ee-53faf3980135 00:25:00.737 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 68d17a36-0d16-494f-b2ee-53faf3980135 ']' 00:25:00.737 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:00.995 [2024-07-25 14:07:49.857064] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.995 [2024-07-25 14:07:49.857298] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.995 [2024-07-25 14:07:49.857503] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.995 [2024-07-25 14:07:49.857712] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.995 [2024-07-25 14:07:49.857851] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:25:00.995 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.995 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:25:01.259 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:25:01.259 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:25:01.259 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:25:01.259 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:01.532 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:25:01.532 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:01.790 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:25:01.790 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:02.049 14:07:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:25:02.049 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:02.307 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:02.307 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.569 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.570 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.570 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.570 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:02.570 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:02.830 [2024-07-25 14:07:51.722249] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:02.830 [2024-07-25 14:07:51.724607] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:02.830 [2024-07-25 14:07:51.724812] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:02.830 [2024-07-25 14:07:51.724903] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:02.830 [2024-07-25 14:07:51.725069] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:02.830 [2024-07-25 14:07:51.725279] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:02.830 [2024-07-25 14:07:51.725443] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:25:02.830 [2024-07-25 14:07:51.725605] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:02.830 [2024-07-25 14:07:51.725769] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:02.830 [2024-07-25 14:07:51.725929] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:25:02.830 request: 00:25:02.830 { 00:25:02.830 "name": "raid_bdev1", 00:25:02.830 "raid_level": "raid0", 00:25:02.830 "base_bdevs": [ 00:25:02.830 "malloc1", 00:25:02.830 "malloc2", 00:25:02.830 "malloc3", 00:25:02.830 "malloc4" 00:25:02.830 ], 00:25:02.830 "strip_size_kb": 64, 00:25:02.830 "superblock": false, 00:25:02.830 "method": "bdev_raid_create", 00:25:02.830 "req_id": 1 00:25:02.830 } 00:25:02.830 Got JSON-RPC error response 00:25:02.830 response: 00:25:02.830 { 00:25:02.830 "code": -17, 00:25:02.830 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:02.830 } 00:25:02.830 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:25:02.830 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:02.830 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:02.830 14:07:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:02.830 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.830 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:25:03.088 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:25:03.088 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:25:03.088 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:03.347 [2024-07-25 14:07:52.278392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:03.347 [2024-07-25 14:07:52.278703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.347 [2024-07-25 14:07:52.278782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:03.347 [2024-07-25 14:07:52.279033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.347 [2024-07-25 14:07:52.281673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.347 [2024-07-25 14:07:52.281857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:03.347 [2024-07-25 14:07:52.282088] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:03.347 [2024-07-25 14:07:52.282249] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:03.347 pt1 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:03.347 14:07:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.347 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.605 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.605 "name": "raid_bdev1", 00:25:03.605 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:25:03.605 "strip_size_kb": 64, 00:25:03.605 "state": "configuring", 00:25:03.605 "raid_level": "raid0", 00:25:03.605 "superblock": true, 00:25:03.605 "num_base_bdevs": 4, 00:25:03.605 "num_base_bdevs_discovered": 1, 00:25:03.605 "num_base_bdevs_operational": 4, 00:25:03.605 "base_bdevs_list": [ 00:25:03.605 { 00:25:03.605 "name": "pt1", 00:25:03.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:03.605 "is_configured": true, 00:25:03.605 "data_offset": 2048, 00:25:03.605 "data_size": 63488 00:25:03.605 }, 00:25:03.605 { 00:25:03.605 "name": null, 00:25:03.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.605 "is_configured": false, 00:25:03.605 "data_offset": 2048, 00:25:03.605 "data_size": 63488 00:25:03.605 }, 00:25:03.605 { 00:25:03.605 "name": null, 00:25:03.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:03.605 "is_configured": false, 00:25:03.605 "data_offset": 2048, 00:25:03.605 "data_size": 63488 00:25:03.605 }, 00:25:03.605 { 00:25:03.605 "name": null, 00:25:03.605 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:03.605 "is_configured": false, 00:25:03.605 "data_offset": 2048, 00:25:03.605 "data_size": 63488 00:25:03.605 } 00:25:03.605 ] 00:25:03.605 }' 00:25:03.605 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.605 14:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.171 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:25:04.171 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:04.429 [2024-07-25 14:07:53.470840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:04.429 [2024-07-25 14:07:53.471182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.429 [2024-07-25 14:07:53.471405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:04.429 [2024-07-25 14:07:53.471564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.687 [2024-07-25 14:07:53.472224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:04.687 [2024-07-25 14:07:53.472390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:04.687 [2024-07-25 14:07:53.472616] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:04.687 [2024-07-25 14:07:53.472746] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:04.687 pt2 00:25:04.687 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:04.945 [2024-07-25 14:07:53.758946] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.945 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.202 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.202 "name": "raid_bdev1", 00:25:05.202 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:25:05.202 "strip_size_kb": 64, 00:25:05.202 "state": "configuring", 00:25:05.202 "raid_level": "raid0", 00:25:05.203 "superblock": true, 00:25:05.203 "num_base_bdevs": 4, 00:25:05.203 "num_base_bdevs_discovered": 1, 00:25:05.203 "num_base_bdevs_operational": 4, 00:25:05.203 "base_bdevs_list": [ 00:25:05.203 { 00:25:05.203 "name": "pt1", 00:25:05.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:05.203 "is_configured": true, 00:25:05.203 "data_offset": 2048, 00:25:05.203 "data_size": 63488 00:25:05.203 }, 00:25:05.203 { 00:25:05.203 "name": null, 00:25:05.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:05.203 "is_configured": false, 00:25:05.203 "data_offset": 2048, 00:25:05.203 "data_size": 63488 00:25:05.203 }, 00:25:05.203 { 00:25:05.203 "name": null, 00:25:05.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:05.203 "is_configured": false, 00:25:05.203 "data_offset": 2048, 00:25:05.203 "data_size": 63488 00:25:05.203 }, 00:25:05.203 { 00:25:05.203 "name": null, 00:25:05.203 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:05.203 "is_configured": false, 00:25:05.203 "data_offset": 2048, 00:25:05.203 "data_size": 63488 00:25:05.203 } 00:25:05.203 ] 00:25:05.203 }' 00:25:05.203 14:07:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.203 14:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:25:05.768 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:05.768 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:06.073 [2024-07-25 14:07:54.887174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:06.073 [2024-07-25 14:07:54.887482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.073 [2024-07-25 14:07:54.887643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:06.073 [2024-07-25 14:07:54.887796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.073 [2024-07-25 14:07:54.888492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.073 [2024-07-25 14:07:54.888660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:06.073 [2024-07-25 14:07:54.888878] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:06.073 [2024-07-25 14:07:54.889018] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:06.073 pt2 00:25:06.073 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:25:06.073 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:06.073 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:06.331 [2024-07-25 14:07:55.127252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:06.331 [2024-07-25 14:07:55.127491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.331 [2024-07-25 14:07:55.127579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:06.331 [2024-07-25 14:07:55.127849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.331 [2024-07-25 14:07:55.128541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.331 [2024-07-25 14:07:55.128709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:06.331 [2024-07-25 14:07:55.128925] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:06.331 [2024-07-25 14:07:55.129043] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:06.331 pt3 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:06.331 [2024-07-25 14:07:55.347257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:25:06.331 [2024-07-25 14:07:55.347476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.331 [2024-07-25 14:07:55.347553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:06.331 [2024-07-25 14:07:55.347696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.331 [2024-07-25 14:07:55.348262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.331 [2024-07-25 14:07:55.348418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:06.331 [2024-07-25 14:07:55.348628] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:06.331 [2024-07-25 14:07:55.348769] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:06.331 [2024-07-25 14:07:55.349063] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:25:06.331 [2024-07-25 14:07:55.349164] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:06.331 [2024-07-25 14:07:55.349301] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:06.331 [2024-07-25 14:07:55.349715] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:25:06.331 [2024-07-25 14:07:55.349857] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:25:06.331 [2024-07-25 14:07:55.350114] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.331 pt4 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.331 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.589 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.847 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.847 "name": "raid_bdev1", 00:25:06.847 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:25:06.847 "strip_size_kb": 64, 00:25:06.847 "state": "online", 00:25:06.847 
"raid_level": "raid0", 00:25:06.847 "superblock": true, 00:25:06.847 "num_base_bdevs": 4, 00:25:06.847 "num_base_bdevs_discovered": 4, 00:25:06.847 "num_base_bdevs_operational": 4, 00:25:06.847 "base_bdevs_list": [ 00:25:06.847 { 00:25:06.847 "name": "pt1", 00:25:06.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:06.847 "is_configured": true, 00:25:06.847 "data_offset": 2048, 00:25:06.847 "data_size": 63488 00:25:06.847 }, 00:25:06.847 { 00:25:06.847 "name": "pt2", 00:25:06.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:06.847 "is_configured": true, 00:25:06.847 "data_offset": 2048, 00:25:06.847 "data_size": 63488 00:25:06.847 }, 00:25:06.847 { 00:25:06.847 "name": "pt3", 00:25:06.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:06.847 "is_configured": true, 00:25:06.847 "data_offset": 2048, 00:25:06.847 "data_size": 63488 00:25:06.847 }, 00:25:06.847 { 00:25:06.847 "name": "pt4", 00:25:06.847 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:06.847 "is_configured": true, 00:25:06.847 "data_offset": 2048, 00:25:06.847 "data_size": 63488 00:25:06.847 } 00:25:06.847 ] 00:25:06.847 }' 00:25:06.847 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.847 14:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:07.413 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:07.413 [2024-07-25 14:07:56.439859] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:07.672 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:07.672 "name": "raid_bdev1", 00:25:07.672 "aliases": [ 00:25:07.672 "68d17a36-0d16-494f-b2ee-53faf3980135" 00:25:07.672 ], 00:25:07.672 "product_name": "Raid Volume", 00:25:07.672 "block_size": 512, 00:25:07.672 "num_blocks": 253952, 00:25:07.672 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:25:07.672 "assigned_rate_limits": { 00:25:07.672 "rw_ios_per_sec": 0, 00:25:07.672 "rw_mbytes_per_sec": 0, 00:25:07.672 "r_mbytes_per_sec": 0, 00:25:07.672 "w_mbytes_per_sec": 0 00:25:07.672 }, 00:25:07.672 "claimed": false, 00:25:07.672 "zoned": false, 00:25:07.672 "supported_io_types": { 00:25:07.672 "read": true, 00:25:07.672 "write": true, 00:25:07.672 "unmap": true, 00:25:07.672 "flush": true, 00:25:07.672 "reset": true, 00:25:07.672 "nvme_admin": false, 00:25:07.672 "nvme_io": false, 00:25:07.672 "nvme_io_md": false, 00:25:07.672 "write_zeroes": true, 00:25:07.672 "zcopy": false, 00:25:07.672 "get_zone_info": false, 00:25:07.672 "zone_management": false, 00:25:07.672 "zone_append": false, 00:25:07.672 "compare": false, 00:25:07.672 "compare_and_write": false, 
00:25:07.672 "abort": false, 00:25:07.672 "seek_hole": false, 00:25:07.672 "seek_data": false, 00:25:07.672 "copy": false, 00:25:07.672 "nvme_iov_md": false 00:25:07.672 }, 00:25:07.672 "memory_domains": [ 00:25:07.672 { 00:25:07.672 "dma_device_id": "system", 00:25:07.672 "dma_device_type": 1 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.672 "dma_device_type": 2 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "dma_device_id": "system", 00:25:07.672 "dma_device_type": 1 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.672 "dma_device_type": 2 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "dma_device_id": "system", 00:25:07.672 "dma_device_type": 1 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.672 "dma_device_type": 2 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "dma_device_id": "system", 00:25:07.672 "dma_device_type": 1 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.672 "dma_device_type": 2 00:25:07.672 } 00:25:07.672 ], 00:25:07.672 "driver_specific": { 00:25:07.672 "raid": { 00:25:07.672 "uuid": "68d17a36-0d16-494f-b2ee-53faf3980135", 00:25:07.672 "strip_size_kb": 64, 00:25:07.672 "state": "online", 00:25:07.672 "raid_level": "raid0", 00:25:07.672 "superblock": true, 00:25:07.672 "num_base_bdevs": 4, 00:25:07.672 "num_base_bdevs_discovered": 4, 00:25:07.672 "num_base_bdevs_operational": 4, 00:25:07.672 "base_bdevs_list": [ 00:25:07.672 { 00:25:07.672 "name": "pt1", 00:25:07.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:07.672 "is_configured": true, 00:25:07.672 "data_offset": 2048, 00:25:07.672 "data_size": 63488 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "name": "pt2", 00:25:07.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:07.672 "is_configured": true, 00:25:07.672 "data_offset": 2048, 00:25:07.672 "data_size": 63488 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "name": "pt3", 00:25:07.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:07.672 "is_configured": true, 00:25:07.672 "data_offset": 2048, 00:25:07.672 "data_size": 63488 00:25:07.672 }, 00:25:07.672 { 00:25:07.672 "name": "pt4", 00:25:07.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:07.672 "is_configured": true, 00:25:07.672 "data_offset": 2048, 00:25:07.672 "data_size": 63488 00:25:07.672 } 00:25:07.672 ] 00:25:07.672 } 00:25:07.672 } 00:25:07.672 }' 00:25:07.672 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:07.672 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:07.672 pt2 00:25:07.672 pt3 00:25:07.672 pt4' 00:25:07.672 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:07.672 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:07.672 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:07.930 "name": "pt1", 00:25:07.930 "aliases": [ 00:25:07.930 "00000000-0000-0000-0000-000000000001" 00:25:07.930 ], 00:25:07.930 "product_name": "passthru", 00:25:07.930 "block_size": 512, 00:25:07.930 "num_blocks": 65536, 00:25:07.930 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:07.930 "assigned_rate_limits": { 00:25:07.930 "rw_ios_per_sec": 0, 00:25:07.930 "rw_mbytes_per_sec": 0, 00:25:07.930 "r_mbytes_per_sec": 0, 00:25:07.930 "w_mbytes_per_sec": 0 00:25:07.930 }, 00:25:07.930 "claimed": true, 00:25:07.930 "claim_type": "exclusive_write", 00:25:07.930 "zoned": false, 00:25:07.930 "supported_io_types": { 00:25:07.930 "read": true, 00:25:07.930 "write": true, 00:25:07.930 "unmap": true, 00:25:07.930 "flush": true, 00:25:07.930 "reset": true, 00:25:07.930 "nvme_admin": false, 00:25:07.930 "nvme_io": false, 00:25:07.930 "nvme_io_md": false, 00:25:07.930 "write_zeroes": true, 00:25:07.930 "zcopy": true, 00:25:07.930 "get_zone_info": false, 00:25:07.930 "zone_management": false, 00:25:07.930 "zone_append": false, 00:25:07.930 "compare": false, 00:25:07.930 "compare_and_write": false, 00:25:07.930 "abort": true, 00:25:07.930 "seek_hole": false, 00:25:07.930 "seek_data": false, 00:25:07.930 "copy": true, 00:25:07.930 "nvme_iov_md": false 00:25:07.930 }, 00:25:07.930 "memory_domains": [ 00:25:07.930 { 00:25:07.930 "dma_device_id": "system", 00:25:07.930 "dma_device_type": 1 00:25:07.930 }, 00:25:07.930 { 00:25:07.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.930 "dma_device_type": 2 00:25:07.930 } 00:25:07.930 ], 00:25:07.930 "driver_specific": { 00:25:07.930 "passthru": { 00:25:07.930 "name": "pt1", 00:25:07.930 "base_bdev_name": "malloc1" 00:25:07.930 } 00:25:07.930 } 00:25:07.930 }' 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:07.930 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:08.187 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:08.445 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:08.445 "name": "pt2", 00:25:08.445 "aliases": [ 00:25:08.445 "00000000-0000-0000-0000-000000000002" 00:25:08.445 ], 00:25:08.445 "product_name": "passthru", 00:25:08.445 "block_size": 512, 00:25:08.445 "num_blocks": 65536, 00:25:08.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:08.445 "assigned_rate_limits": { 00:25:08.445 "rw_ios_per_sec": 0, 00:25:08.445 "rw_mbytes_per_sec": 0, 
00:25:08.445 "r_mbytes_per_sec": 0, 00:25:08.445 "w_mbytes_per_sec": 0 00:25:08.445 }, 00:25:08.445 "claimed": true, 00:25:08.445 "claim_type": "exclusive_write", 00:25:08.445 "zoned": false, 00:25:08.445 "supported_io_types": { 00:25:08.445 "read": true, 00:25:08.445 "write": true, 00:25:08.445 "unmap": true, 00:25:08.445 "flush": true, 00:25:08.445 "reset": true, 00:25:08.445 "nvme_admin": false, 00:25:08.445 "nvme_io": false, 00:25:08.445 "nvme_io_md": false, 00:25:08.445 "write_zeroes": true, 00:25:08.445 "zcopy": true, 00:25:08.445 "get_zone_info": false, 00:25:08.445 "zone_management": false, 00:25:08.445 "zone_append": false, 00:25:08.445 "compare": false, 00:25:08.445 "compare_and_write": false, 00:25:08.445 "abort": true, 00:25:08.445 "seek_hole": false, 00:25:08.445 "seek_data": false, 00:25:08.445 "copy": true, 00:25:08.445 "nvme_iov_md": false 00:25:08.445 }, 00:25:08.445 "memory_domains": [ 00:25:08.445 { 00:25:08.445 "dma_device_id": "system", 00:25:08.445 "dma_device_type": 1 00:25:08.445 }, 00:25:08.445 { 00:25:08.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.445 "dma_device_type": 2 00:25:08.445 } 00:25:08.445 ], 00:25:08.445 "driver_specific": { 00:25:08.445 "passthru": { 00:25:08.445 "name": "pt2", 00:25:08.445 "base_bdev_name": "malloc2" 00:25:08.445 } 00:25:08.445 } 00:25:08.445 }' 00:25:08.445 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.445 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:08.703 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:08.961 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:08.961 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:08.961 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:08.961 "name": "pt3", 00:25:08.961 "aliases": [ 00:25:08.961 "00000000-0000-0000-0000-000000000003" 00:25:08.961 ], 00:25:08.961 "product_name": "passthru", 00:25:08.961 "block_size": 512, 00:25:08.961 "num_blocks": 65536, 00:25:08.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:08.961 "assigned_rate_limits": { 00:25:08.961 "rw_ios_per_sec": 0, 00:25:08.961 "rw_mbytes_per_sec": 0, 00:25:08.961 "r_mbytes_per_sec": 0, 00:25:08.961 "w_mbytes_per_sec": 0 00:25:08.961 }, 00:25:08.961 "claimed": true, 00:25:08.961 "claim_type": 
"exclusive_write", 00:25:08.961 "zoned": false, 00:25:08.961 "supported_io_types": { 00:25:08.961 "read": true, 00:25:08.961 "write": true, 00:25:08.961 "unmap": true, 00:25:08.961 "flush": true, 00:25:08.961 "reset": true, 00:25:08.961 "nvme_admin": false, 00:25:08.961 "nvme_io": false, 00:25:08.961 "nvme_io_md": false, 00:25:08.961 "write_zeroes": true, 00:25:08.961 "zcopy": true, 00:25:08.961 "get_zone_info": false, 00:25:08.961 "zone_management": false, 00:25:08.961 "zone_append": false, 00:25:08.961 "compare": false, 00:25:08.961 "compare_and_write": false, 00:25:08.961 "abort": true, 00:25:08.961 "seek_hole": false, 00:25:08.961 "seek_data": false, 00:25:08.961 "copy": true, 00:25:08.961 "nvme_iov_md": false 00:25:08.961 }, 00:25:08.961 "memory_domains": [ 00:25:08.961 { 00:25:08.961 "dma_device_id": "system", 00:25:08.961 "dma_device_type": 1 00:25:08.961 }, 00:25:08.961 { 00:25:08.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.961 "dma_device_type": 2 00:25:08.961 } 00:25:08.961 ], 00:25:08.961 "driver_specific": { 00:25:08.961 "passthru": { 00:25:08.961 "name": "pt3", 00:25:08.961 "base_bdev_name": "malloc3" 00:25:08.961 } 00:25:08.961 } 00:25:08.961 }' 00:25:08.961 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.219 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.219 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:09.219 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.219 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.219 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:09.219 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.219 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.477 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:09.477 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.477 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.477 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:09.477 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:09.477 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:09.477 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:09.735 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:09.735 "name": "pt4", 00:25:09.735 "aliases": [ 00:25:09.735 "00000000-0000-0000-0000-000000000004" 00:25:09.735 ], 00:25:09.735 "product_name": "passthru", 00:25:09.735 "block_size": 512, 00:25:09.735 "num_blocks": 65536, 00:25:09.735 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:09.735 "assigned_rate_limits": { 00:25:09.735 "rw_ios_per_sec": 0, 00:25:09.735 "rw_mbytes_per_sec": 0, 00:25:09.735 "r_mbytes_per_sec": 0, 00:25:09.735 "w_mbytes_per_sec": 0 00:25:09.735 }, 00:25:09.735 "claimed": true, 00:25:09.735 "claim_type": "exclusive_write", 00:25:09.735 "zoned": false, 00:25:09.735 "supported_io_types": { 00:25:09.735 "read": true, 00:25:09.735 "write": true, 00:25:09.735 
"unmap": true, 00:25:09.735 "flush": true, 00:25:09.735 "reset": true, 00:25:09.735 "nvme_admin": false, 00:25:09.735 "nvme_io": false, 00:25:09.735 "nvme_io_md": false, 00:25:09.735 "write_zeroes": true, 00:25:09.735 "zcopy": true, 00:25:09.735 "get_zone_info": false, 00:25:09.735 "zone_management": false, 00:25:09.735 "zone_append": false, 00:25:09.735 "compare": false, 00:25:09.735 "compare_and_write": false, 00:25:09.735 "abort": true, 00:25:09.735 "seek_hole": false, 00:25:09.735 "seek_data": false, 00:25:09.735 "copy": true, 00:25:09.735 "nvme_iov_md": false 00:25:09.735 }, 00:25:09.735 "memory_domains": [ 00:25:09.735 { 00:25:09.735 "dma_device_id": "system", 00:25:09.735 "dma_device_type": 1 00:25:09.735 }, 00:25:09.735 { 00:25:09.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.735 "dma_device_type": 2 00:25:09.735 } 00:25:09.735 ], 00:25:09.735 "driver_specific": { 00:25:09.735 "passthru": { 00:25:09.735 "name": "pt4", 00:25:09.735 "base_bdev_name": "malloc4" 00:25:09.735 } 00:25:09.735 } 00:25:09.735 }' 00:25:09.735 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.735 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:09.735 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:09.735 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.735 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:09.993 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:09.993 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.993 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:09.993 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:09.993 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.993 14:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:09.993 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:09.993 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:09.993 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:25:10.252 [2024-07-25 14:07:59.276507] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 68d17a36-0d16-494f-b2ee-53faf3980135 '!=' 68d17a36-0d16-494f-b2ee-53faf3980135 ']' 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 136435 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 136435 ']' 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 136435 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:25:10.510 14:07:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136435 00:25:10.510 killing process with pid 136435 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136435' 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 136435 00:25:10.510 [2024-07-25 14:07:59.321900] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:10.510 [2024-07-25 14:07:59.321992] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:10.510 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 136435 00:25:10.510 [2024-07-25 14:07:59.322069] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:10.510 [2024-07-25 14:07:59.322080] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:25:10.769 [2024-07-25 14:07:59.645515] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:12.142 ************************************ 00:25:12.142 END TEST raid_superblock_test 00:25:12.142 ************************************ 00:25:12.142 14:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:25:12.142 00:25:12.142 real 0m18.568s 00:25:12.142 user 0m33.609s 00:25:12.142 sys 0m2.070s 00:25:12.142 14:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.142 14:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.142 14:08:00 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:25:12.142 14:08:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:12.142 14:08:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:12.142 14:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:12.142 ************************************ 00:25:12.142 START TEST raid_read_error_test 00:25:12.142 ************************************ 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid0 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=4 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 
00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev4 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid0 '!=' raid1 ']' 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.b2zKldwpSp 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=136995 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 136995 /var/tmp/spdk-raid.sock 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 136995 ']' 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:12.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.142 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.142 [2024-07-25 14:08:00.894230] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
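The raid_read_error_test pass that has just started builds each of its four base bdevs as a malloc bdev wrapped first by an error bdev and then by a passthru bdev, so a failure can later be injected below the RAID layer. A condensed, illustrative sketch of that per-bdev stacking, using the same rpc.py socket, sizes, and naming convention as the RPC calls logged below (the explicit 1..4 loop is a simplification of the script's iteration over its base_bdevs array):

# One 32 MiB / 512-byte-block malloc bdev per base device, wrapped by an error
# bdev (which SPDK names EE_<malloc name>) and exposed to the RAID test as BaseBdevN.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for n in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "BaseBdev${n}_malloc"
  $RPC bdev_error_create "BaseBdev${n}_malloc"
  $RPC bdev_passthru_create -b "EE_BaseBdev${n}_malloc" -p "BaseBdev${n}"
done
# The four passthru bdevs are then assembled exactly as logged further on:
# bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s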
00:25:12.142 [2024-07-25 14:08:00.894715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136995 ] 00:25:12.142 [2024-07-25 14:08:01.068775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.398 [2024-07-25 14:08:01.317301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.653 [2024-07-25 14:08:01.522213] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:12.918 14:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.918 14:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:12.918 14:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:12.918 14:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:13.483 BaseBdev1_malloc 00:25:13.483 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:13.483 true 00:25:13.739 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:13.739 [2024-07-25 14:08:02.751447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:13.739 [2024-07-25 14:08:02.751790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.739 [2024-07-25 14:08:02.752011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:13.739 [2024-07-25 14:08:02.752160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.739 [2024-07-25 14:08:02.754871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.739 [2024-07-25 14:08:02.755054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:13.739 BaseBdev1 00:25:13.739 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:13.739 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:14.324 BaseBdev2_malloc 00:25:14.324 14:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:14.324 true 00:25:14.324 14:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:14.582 [2024-07-25 14:08:03.551361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:14.582 [2024-07-25 14:08:03.551706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.582 [2024-07-25 14:08:03.551901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:14.582 [2024-07-25 14:08:03.552081] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.582 [2024-07-25 14:08:03.554696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.582 [2024-07-25 14:08:03.554878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:14.582 BaseBdev2 00:25:14.582 14:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:14.582 14:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:14.839 BaseBdev3_malloc 00:25:14.839 14:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:15.097 true 00:25:15.097 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:15.355 [2024-07-25 14:08:04.334722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:15.355 [2024-07-25 14:08:04.334991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.355 [2024-07-25 14:08:04.335167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:15.355 [2024-07-25 14:08:04.335333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.355 [2024-07-25 14:08:04.338096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.355 [2024-07-25 14:08:04.338287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:15.355 BaseBdev3 00:25:15.355 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:15.355 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:15.612 BaseBdev4_malloc 00:25:15.612 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:15.882 true 00:25:15.882 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:16.139 [2024-07-25 14:08:05.106225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:16.139 [2024-07-25 14:08:05.106580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.139 [2024-07-25 14:08:05.106784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:16.139 [2024-07-25 14:08:05.106943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.139 [2024-07-25 14:08:05.109672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.139 [2024-07-25 14:08:05.109876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:16.139 BaseBdev4 00:25:16.139 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:16.397 [2024-07-25 14:08:05.346566] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:16.397 [2024-07-25 14:08:05.349112] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:16.397 [2024-07-25 14:08:05.349385] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:16.397 [2024-07-25 14:08:05.349588] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:16.397 [2024-07-25 14:08:05.350060] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:25:16.397 [2024-07-25 14:08:05.350214] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:16.397 [2024-07-25 14:08:05.350455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:16.397 [2024-07-25 14:08:05.351043] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:25:16.397 [2024-07-25 14:08:05.351192] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:25:16.397 [2024-07-25 14:08:05.351536] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.397 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.655 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.655 "name": "raid_bdev1", 00:25:16.655 "uuid": "ff9ac054-aa96-4542-8f0f-0f0a05063a69", 00:25:16.655 "strip_size_kb": 64, 00:25:16.655 "state": "online", 00:25:16.655 "raid_level": "raid0", 00:25:16.655 "superblock": true, 00:25:16.655 "num_base_bdevs": 4, 00:25:16.655 "num_base_bdevs_discovered": 4, 00:25:16.655 "num_base_bdevs_operational": 4, 00:25:16.655 "base_bdevs_list": [ 00:25:16.655 { 00:25:16.655 "name": "BaseBdev1", 00:25:16.655 "uuid": "6270cb13-c2d9-55f6-8f4b-f1470c855ead", 00:25:16.655 "is_configured": true, 00:25:16.655 "data_offset": 2048, 00:25:16.655 "data_size": 63488 00:25:16.655 }, 00:25:16.655 { 00:25:16.655 "name": "BaseBdev2", 
00:25:16.655 "uuid": "197684f0-465d-54c1-8796-15c68f32b116", 00:25:16.655 "is_configured": true, 00:25:16.655 "data_offset": 2048, 00:25:16.655 "data_size": 63488 00:25:16.655 }, 00:25:16.655 { 00:25:16.655 "name": "BaseBdev3", 00:25:16.655 "uuid": "0194906e-be2d-52f3-acdb-d16f5f6c7651", 00:25:16.655 "is_configured": true, 00:25:16.655 "data_offset": 2048, 00:25:16.655 "data_size": 63488 00:25:16.655 }, 00:25:16.655 { 00:25:16.655 "name": "BaseBdev4", 00:25:16.655 "uuid": "26bde68e-ba20-50ed-817e-a3eed8a68b53", 00:25:16.655 "is_configured": true, 00:25:16.655 "data_offset": 2048, 00:25:16.655 "data_size": 63488 00:25:16.655 } 00:25:16.655 ] 00:25:16.655 }' 00:25:16.655 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.655 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.588 14:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:25:17.588 14:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:17.588 [2024-07-25 14:08:06.357030] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=4 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.520 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.777 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.777 "name": "raid_bdev1", 00:25:18.777 "uuid": "ff9ac054-aa96-4542-8f0f-0f0a05063a69", 00:25:18.777 "strip_size_kb": 64, 00:25:18.777 "state": "online", 00:25:18.777 "raid_level": "raid0", 00:25:18.777 "superblock": true, 
00:25:18.777 "num_base_bdevs": 4, 00:25:18.777 "num_base_bdevs_discovered": 4, 00:25:18.777 "num_base_bdevs_operational": 4, 00:25:18.777 "base_bdevs_list": [ 00:25:18.777 { 00:25:18.777 "name": "BaseBdev1", 00:25:18.777 "uuid": "6270cb13-c2d9-55f6-8f4b-f1470c855ead", 00:25:18.777 "is_configured": true, 00:25:18.777 "data_offset": 2048, 00:25:18.777 "data_size": 63488 00:25:18.777 }, 00:25:18.777 { 00:25:18.777 "name": "BaseBdev2", 00:25:18.777 "uuid": "197684f0-465d-54c1-8796-15c68f32b116", 00:25:18.777 "is_configured": true, 00:25:18.777 "data_offset": 2048, 00:25:18.777 "data_size": 63488 00:25:18.777 }, 00:25:18.777 { 00:25:18.777 "name": "BaseBdev3", 00:25:18.777 "uuid": "0194906e-be2d-52f3-acdb-d16f5f6c7651", 00:25:18.777 "is_configured": true, 00:25:18.777 "data_offset": 2048, 00:25:18.777 "data_size": 63488 00:25:18.777 }, 00:25:18.777 { 00:25:18.777 "name": "BaseBdev4", 00:25:18.777 "uuid": "26bde68e-ba20-50ed-817e-a3eed8a68b53", 00:25:18.777 "is_configured": true, 00:25:18.777 "data_offset": 2048, 00:25:18.777 "data_size": 63488 00:25:18.777 } 00:25:18.777 ] 00:25:18.777 }' 00:25:18.777 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.777 14:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.711 14:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:19.711 [2024-07-25 14:08:08.681100] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:19.711 [2024-07-25 14:08:08.681429] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:19.711 [2024-07-25 14:08:08.684619] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:19.711 [2024-07-25 14:08:08.684861] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.711 [2024-07-25 14:08:08.684958] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:19.711 [2024-07-25 14:08:08.685165] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:25:19.711 0 00:25:19.711 14:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 136995 00:25:19.711 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 136995 ']' 00:25:19.711 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 136995 00:25:19.711 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:25:19.712 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.712 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136995 00:25:19.712 killing process with pid 136995 00:25:19.712 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:19.712 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.712 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136995' 00:25:19.712 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 136995 00:25:19.712 14:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 136995 00:25:19.712 
[2024-07-25 14:08:08.716803] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:19.970 [2024-07-25 14:08:09.011279] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.b2zKldwpSp 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.43 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid0 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.43 != \0\.\0\0 ]] 00:25:21.345 ************************************ 00:25:21.345 END TEST raid_read_error_test 00:25:21.345 ************************************ 00:25:21.345 00:25:21.345 real 0m9.417s 00:25:21.345 user 0m14.695s 00:25:21.345 sys 0m1.017s 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.345 14:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.346 14:08:10 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:25:21.346 14:08:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:21.346 14:08:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.346 14:08:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:21.346 ************************************ 00:25:21.346 START TEST raid_write_error_test 00:25:21.346 ************************************ 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid0 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=4 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:21.346 
14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev4 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid0 '!=' raid1 ']' 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.AXVvSFqfEB 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=137212 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 137212 /var/tmp/spdk-raid.sock 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 137212 ']' 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:21.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.346 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.604 [2024-07-25 14:08:10.388915] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
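The read pass above judged success by pulling the failure-rate column for raid_bdev1 out of the bdevperf log and requiring it to be non-zero (0.43 in this run); the write pass now starting drives the same harness, raid_io_error_test raid0 4 write, against its own log file /raidtest/tmp.AXVvSFqfEB. The extraction step as it was logged for the read pass, with the meaning of column 6 inferred only from the fail_per_s name:

# Drop bdevperf's per-job lines, keep the raid_bdev1 stats row, and take the
# column the test treats as failures per second.
fail_per_s=$(grep -v Job /raidtest/tmp.b2zKldwpSp | grep raid_bdev1 | awk '{print $6}')
[[ "$fail_per_s" != "0.00" ]]   # the read-error run passed with fail_per_s=0.43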
00:25:21.604 [2024-07-25 14:08:10.389310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137212 ] 00:25:21.604 [2024-07-25 14:08:10.562396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.861 [2024-07-25 14:08:10.807960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.119 [2024-07-25 14:08:11.034937] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:22.377 14:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:22.377 14:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:22.377 14:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:22.377 14:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:22.635 BaseBdev1_malloc 00:25:22.635 14:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:22.893 true 00:25:22.893 14:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:23.150 [2024-07-25 14:08:12.133159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:23.150 [2024-07-25 14:08:12.133457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.150 [2024-07-25 14:08:12.133661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:23.150 [2024-07-25 14:08:12.133829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.150 [2024-07-25 14:08:12.136618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.150 [2024-07-25 14:08:12.136790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:23.150 BaseBdev1 00:25:23.150 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:23.150 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:23.714 BaseBdev2_malloc 00:25:23.715 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:23.715 true 00:25:23.972 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:24.231 [2024-07-25 14:08:13.046808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:24.231 [2024-07-25 14:08:13.047165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.231 [2024-07-25 14:08:13.047343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:24.231 [2024-07-25 
14:08:13.047502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.231 [2024-07-25 14:08:13.050218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.231 [2024-07-25 14:08:13.050406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:24.231 BaseBdev2 00:25:24.231 14:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:24.231 14:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:24.539 BaseBdev3_malloc 00:25:24.540 14:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:24.798 true 00:25:24.798 14:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:25.056 [2024-07-25 14:08:14.054547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:25.056 [2024-07-25 14:08:14.054915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.056 [2024-07-25 14:08:14.055005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:25.056 [2024-07-25 14:08:14.055233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.056 [2024-07-25 14:08:14.057932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.056 [2024-07-25 14:08:14.058130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:25.056 BaseBdev3 00:25:25.056 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:25:25.056 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:25.622 BaseBdev4_malloc 00:25:25.622 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:25.880 true 00:25:25.880 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:26.137 [2024-07-25 14:08:15.010432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:26.137 [2024-07-25 14:08:15.010761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.137 [2024-07-25 14:08:15.010956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:26.137 [2024-07-25 14:08:15.011106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.137 [2024-07-25 14:08:15.013886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.137 [2024-07-25 14:08:15.014069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:26.137 BaseBdev4 00:25:26.137 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:26.395 [2024-07-25 14:08:15.282558] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:26.395 [2024-07-25 14:08:15.285025] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.395 [2024-07-25 14:08:15.285279] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:26.395 [2024-07-25 14:08:15.285521] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:26.395 [2024-07-25 14:08:15.285855] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:25:26.395 [2024-07-25 14:08:15.285992] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:26.395 [2024-07-25 14:08:15.286184] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:26.395 [2024-07-25 14:08:15.286787] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:25:26.395 [2024-07-25 14:08:15.286921] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:25:26.395 [2024-07-25 14:08:15.287271] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.395 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.653 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.653 "name": "raid_bdev1", 00:25:26.653 "uuid": "5e2fc6e4-45c3-47e1-bba1-a5897c8bfbc9", 00:25:26.653 "strip_size_kb": 64, 00:25:26.653 "state": "online", 00:25:26.653 "raid_level": "raid0", 00:25:26.653 "superblock": true, 00:25:26.653 "num_base_bdevs": 4, 00:25:26.653 "num_base_bdevs_discovered": 4, 00:25:26.653 "num_base_bdevs_operational": 4, 00:25:26.653 "base_bdevs_list": [ 00:25:26.653 { 00:25:26.653 "name": "BaseBdev1", 00:25:26.653 "uuid": "8df15033-db81-553b-a60d-78dcdd0da622", 00:25:26.653 "is_configured": true, 00:25:26.653 "data_offset": 2048, 00:25:26.653 "data_size": 63488 00:25:26.653 }, 00:25:26.653 { 
00:25:26.653 "name": "BaseBdev2", 00:25:26.653 "uuid": "d3deea57-a138-5da0-8843-7fb0ea6e75eb", 00:25:26.653 "is_configured": true, 00:25:26.653 "data_offset": 2048, 00:25:26.653 "data_size": 63488 00:25:26.653 }, 00:25:26.653 { 00:25:26.653 "name": "BaseBdev3", 00:25:26.653 "uuid": "de09e00c-0356-5060-a632-7a65cb9cd033", 00:25:26.653 "is_configured": true, 00:25:26.653 "data_offset": 2048, 00:25:26.653 "data_size": 63488 00:25:26.653 }, 00:25:26.653 { 00:25:26.653 "name": "BaseBdev4", 00:25:26.653 "uuid": "bc4b540e-0fdf-5b6a-bb07-011360514822", 00:25:26.653 "is_configured": true, 00:25:26.653 "data_offset": 2048, 00:25:26.653 "data_size": 63488 00:25:26.653 } 00:25:26.653 ] 00:25:26.653 }' 00:25:26.653 14:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.653 14:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.587 14:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:25:27.587 14:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:27.587 [2024-07-25 14:08:16.392818] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:28.520 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=4 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.778 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.035 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.035 "name": "raid_bdev1", 00:25:29.035 "uuid": "5e2fc6e4-45c3-47e1-bba1-a5897c8bfbc9", 00:25:29.035 "strip_size_kb": 64, 00:25:29.035 "state": "online", 00:25:29.035 
"raid_level": "raid0", 00:25:29.035 "superblock": true, 00:25:29.035 "num_base_bdevs": 4, 00:25:29.035 "num_base_bdevs_discovered": 4, 00:25:29.035 "num_base_bdevs_operational": 4, 00:25:29.035 "base_bdevs_list": [ 00:25:29.035 { 00:25:29.035 "name": "BaseBdev1", 00:25:29.035 "uuid": "8df15033-db81-553b-a60d-78dcdd0da622", 00:25:29.035 "is_configured": true, 00:25:29.035 "data_offset": 2048, 00:25:29.035 "data_size": 63488 00:25:29.035 }, 00:25:29.035 { 00:25:29.035 "name": "BaseBdev2", 00:25:29.035 "uuid": "d3deea57-a138-5da0-8843-7fb0ea6e75eb", 00:25:29.035 "is_configured": true, 00:25:29.035 "data_offset": 2048, 00:25:29.035 "data_size": 63488 00:25:29.035 }, 00:25:29.035 { 00:25:29.035 "name": "BaseBdev3", 00:25:29.035 "uuid": "de09e00c-0356-5060-a632-7a65cb9cd033", 00:25:29.035 "is_configured": true, 00:25:29.035 "data_offset": 2048, 00:25:29.035 "data_size": 63488 00:25:29.035 }, 00:25:29.035 { 00:25:29.035 "name": "BaseBdev4", 00:25:29.035 "uuid": "bc4b540e-0fdf-5b6a-bb07-011360514822", 00:25:29.035 "is_configured": true, 00:25:29.035 "data_offset": 2048, 00:25:29.035 "data_size": 63488 00:25:29.035 } 00:25:29.035 ] 00:25:29.035 }' 00:25:29.035 14:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.035 14:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.600 14:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:30.165 [2024-07-25 14:08:18.906020] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:30.165 [2024-07-25 14:08:18.906317] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:30.165 [2024-07-25 14:08:18.909697] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:30.165 [2024-07-25 14:08:18.909907] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.165 [2024-07-25 14:08:18.910121] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:30.165 [2024-07-25 14:08:18.910247] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:25:30.165 0 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 137212 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 137212 ']' 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 137212 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137212 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137212' 00:25:30.165 killing process with pid 137212 00:25:30.165 14:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 137212 00:25:30.165 14:08:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 137212 00:25:30.165 [2024-07-25 14:08:18.948891] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:30.422 [2024-07-25 14:08:19.225459] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.AXVvSFqfEB 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.40 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid0 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.40 != \0\.\0\0 ]] 00:25:31.794 00:25:31.794 real 0m10.140s 00:25:31.794 user 0m16.031s 00:25:31.794 sys 0m1.147s 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:31.794 14:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.794 ************************************ 00:25:31.794 END TEST raid_write_error_test 00:25:31.794 ************************************ 00:25:31.794 14:08:20 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:25:31.794 14:08:20 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:25:31.794 14:08:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:31.794 14:08:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:31.794 14:08:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:31.794 ************************************ 00:25:31.794 START TEST raid_state_function_test 00:25:31.794 ************************************ 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=137439 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 137439' 00:25:31.794 Process raid pid: 137439 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 137439 /var/tmp/spdk-raid.sock 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 137439 ']' 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:31.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:31.794 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.794 [2024-07-25 14:08:20.573249] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
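(Annotation, not part of the log.) The raid_state_function_test run that begins here drives a bare bdev_svc app rather than bdevperf and exercises the raid module's state machine: it creates a concat array whose base bdevs do not exist yet, confirms the array sits in the "configuring" state, then registers base bdevs one by one until the array goes "online". A condensed sketch of one such check, illustrative only, with the socket and bdev names taken from the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Base bdevs are intentionally missing at this point, so the array stays "configuring".
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    state=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ $state == configuring ]]
    # Registering a real bdev under a declared base-bdev name bumps num_base_bdevs_discovered.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_raid_delete Existed_Raid          # tear down between sub-checks, as the trace does

The verify_raid_bdev_state helper seen throughout the trace performs essentially this bdev_raid_get_bdevs/jq comparison for the expected state, raid level, strip size, and base-bdev counts.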
00:25:31.794 [2024-07-25 14:08:20.573729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.794 [2024-07-25 14:08:20.748432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.051 [2024-07-25 14:08:21.004588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.309 [2024-07-25 14:08:21.205369] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:32.567 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:32.567 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:25:32.567 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:32.824 [2024-07-25 14:08:21.802003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:32.824 [2024-07-25 14:08:21.802346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:32.824 [2024-07-25 14:08:21.802474] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:32.824 [2024-07-25 14:08:21.802544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:32.824 [2024-07-25 14:08:21.802670] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:32.824 [2024-07-25 14:08:21.802734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:32.824 [2024-07-25 14:08:21.802769] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:32.824 [2024-07-25 14:08:21.802886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.824 14:08:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.082 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.082 "name": "Existed_Raid", 00:25:33.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.082 "strip_size_kb": 64, 00:25:33.082 "state": "configuring", 00:25:33.082 "raid_level": "concat", 00:25:33.082 "superblock": false, 00:25:33.082 "num_base_bdevs": 4, 00:25:33.082 "num_base_bdevs_discovered": 0, 00:25:33.082 "num_base_bdevs_operational": 4, 00:25:33.082 "base_bdevs_list": [ 00:25:33.082 { 00:25:33.082 "name": "BaseBdev1", 00:25:33.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.082 "is_configured": false, 00:25:33.082 "data_offset": 0, 00:25:33.082 "data_size": 0 00:25:33.082 }, 00:25:33.082 { 00:25:33.082 "name": "BaseBdev2", 00:25:33.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.082 "is_configured": false, 00:25:33.082 "data_offset": 0, 00:25:33.082 "data_size": 0 00:25:33.082 }, 00:25:33.082 { 00:25:33.082 "name": "BaseBdev3", 00:25:33.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.082 "is_configured": false, 00:25:33.082 "data_offset": 0, 00:25:33.082 "data_size": 0 00:25:33.082 }, 00:25:33.082 { 00:25:33.082 "name": "BaseBdev4", 00:25:33.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.082 "is_configured": false, 00:25:33.082 "data_offset": 0, 00:25:33.082 "data_size": 0 00:25:33.082 } 00:25:33.082 ] 00:25:33.082 }' 00:25:33.082 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.082 14:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.020 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:34.278 [2024-07-25 14:08:23.102231] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:34.278 [2024-07-25 14:08:23.102525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:25:34.278 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:34.534 [2024-07-25 14:08:23.334668] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:34.534 [2024-07-25 14:08:23.335078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:34.534 [2024-07-25 14:08:23.335199] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:34.534 [2024-07-25 14:08:23.335299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:34.534 [2024-07-25 14:08:23.335507] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:34.534 [2024-07-25 14:08:23.335590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:34.534 [2024-07-25 14:08:23.335738] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:34.534 [2024-07-25 14:08:23.335904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:34.534 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:34.792 [2024-07-25 14:08:23.638473] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:34.792 BaseBdev1 00:25:34.792 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:34.792 14:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:34.792 14:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:34.792 14:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:34.792 14:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:34.792 14:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:34.792 14:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:35.050 14:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:35.308 [ 00:25:35.308 { 00:25:35.308 "name": "BaseBdev1", 00:25:35.308 "aliases": [ 00:25:35.308 "d13b57e9-8def-4f79-8268-7f954dbf7f68" 00:25:35.308 ], 00:25:35.308 "product_name": "Malloc disk", 00:25:35.308 "block_size": 512, 00:25:35.308 "num_blocks": 65536, 00:25:35.308 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:35.308 "assigned_rate_limits": { 00:25:35.308 "rw_ios_per_sec": 0, 00:25:35.308 "rw_mbytes_per_sec": 0, 00:25:35.308 "r_mbytes_per_sec": 0, 00:25:35.308 "w_mbytes_per_sec": 0 00:25:35.308 }, 00:25:35.308 "claimed": true, 00:25:35.308 "claim_type": "exclusive_write", 00:25:35.308 "zoned": false, 00:25:35.308 "supported_io_types": { 00:25:35.308 "read": true, 00:25:35.308 "write": true, 00:25:35.308 "unmap": true, 00:25:35.308 "flush": true, 00:25:35.308 "reset": true, 00:25:35.308 "nvme_admin": false, 00:25:35.308 "nvme_io": false, 00:25:35.308 "nvme_io_md": false, 00:25:35.308 "write_zeroes": true, 00:25:35.308 "zcopy": true, 00:25:35.308 "get_zone_info": false, 00:25:35.308 "zone_management": false, 00:25:35.308 "zone_append": false, 00:25:35.308 "compare": false, 00:25:35.308 "compare_and_write": false, 00:25:35.308 "abort": true, 00:25:35.308 "seek_hole": false, 00:25:35.308 "seek_data": false, 00:25:35.308 "copy": true, 00:25:35.308 "nvme_iov_md": false 00:25:35.308 }, 00:25:35.308 "memory_domains": [ 00:25:35.308 { 00:25:35.308 "dma_device_id": "system", 00:25:35.308 "dma_device_type": 1 00:25:35.308 }, 00:25:35.308 { 00:25:35.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.308 "dma_device_type": 2 00:25:35.308 } 00:25:35.308 ], 00:25:35.308 "driver_specific": {} 00:25:35.308 } 00:25:35.308 ] 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.308 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.566 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.566 "name": "Existed_Raid", 00:25:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.566 "strip_size_kb": 64, 00:25:35.566 "state": "configuring", 00:25:35.566 "raid_level": "concat", 00:25:35.566 "superblock": false, 00:25:35.566 "num_base_bdevs": 4, 00:25:35.566 "num_base_bdevs_discovered": 1, 00:25:35.566 "num_base_bdevs_operational": 4, 00:25:35.566 "base_bdevs_list": [ 00:25:35.566 { 00:25:35.566 "name": "BaseBdev1", 00:25:35.566 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:35.566 "is_configured": true, 00:25:35.566 "data_offset": 0, 00:25:35.566 "data_size": 65536 00:25:35.566 }, 00:25:35.566 { 00:25:35.566 "name": "BaseBdev2", 00:25:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.566 "is_configured": false, 00:25:35.566 "data_offset": 0, 00:25:35.566 "data_size": 0 00:25:35.566 }, 00:25:35.566 { 00:25:35.566 "name": "BaseBdev3", 00:25:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.566 "is_configured": false, 00:25:35.566 "data_offset": 0, 00:25:35.566 "data_size": 0 00:25:35.566 }, 00:25:35.566 { 00:25:35.566 "name": "BaseBdev4", 00:25:35.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.566 "is_configured": false, 00:25:35.566 "data_offset": 0, 00:25:35.566 "data_size": 0 00:25:35.566 } 00:25:35.566 ] 00:25:35.566 }' 00:25:35.566 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.566 14:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.132 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:36.390 [2024-07-25 14:08:25.370920] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:36.390 [2024-07-25 14:08:25.371268] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:25:36.390 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:36.647 [2024-07-25 14:08:25.642993] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.647 [2024-07-25 14:08:25.645325] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:25:36.647 [2024-07-25 14:08:25.645510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:36.647 [2024-07-25 14:08:25.645629] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:36.647 [2024-07-25 14:08:25.645698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:36.647 [2024-07-25 14:08:25.645808] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:36.647 [2024-07-25 14:08:25.645965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.647 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.905 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.905 "name": "Existed_Raid", 00:25:36.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.905 "strip_size_kb": 64, 00:25:36.905 "state": "configuring", 00:25:36.905 "raid_level": "concat", 00:25:36.905 "superblock": false, 00:25:36.905 "num_base_bdevs": 4, 00:25:36.905 "num_base_bdevs_discovered": 1, 00:25:36.905 "num_base_bdevs_operational": 4, 00:25:36.905 "base_bdevs_list": [ 00:25:36.905 { 00:25:36.905 "name": "BaseBdev1", 00:25:36.905 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:36.905 "is_configured": true, 00:25:36.905 "data_offset": 0, 00:25:36.905 "data_size": 65536 00:25:36.905 }, 00:25:36.905 { 00:25:36.905 "name": "BaseBdev2", 00:25:36.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.905 "is_configured": false, 00:25:36.905 "data_offset": 0, 00:25:36.905 "data_size": 0 00:25:36.905 }, 00:25:36.905 { 00:25:36.905 "name": "BaseBdev3", 00:25:36.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.905 "is_configured": false, 00:25:36.905 "data_offset": 0, 00:25:36.905 "data_size": 0 
00:25:36.905 }, 00:25:36.905 { 00:25:36.905 "name": "BaseBdev4", 00:25:36.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.905 "is_configured": false, 00:25:36.905 "data_offset": 0, 00:25:36.905 "data_size": 0 00:25:36.905 } 00:25:36.905 ] 00:25:36.905 }' 00:25:36.905 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.905 14:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.837 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:38.095 [2024-07-25 14:08:26.884482] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:38.095 BaseBdev2 00:25:38.095 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:38.095 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:38.095 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:38.095 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:38.095 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:38.095 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:38.095 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:38.354 [ 00:25:38.354 { 00:25:38.354 "name": "BaseBdev2", 00:25:38.354 "aliases": [ 00:25:38.354 "be94652c-08c2-41a6-acb2-93eb60e51164" 00:25:38.354 ], 00:25:38.354 "product_name": "Malloc disk", 00:25:38.354 "block_size": 512, 00:25:38.354 "num_blocks": 65536, 00:25:38.354 "uuid": "be94652c-08c2-41a6-acb2-93eb60e51164", 00:25:38.354 "assigned_rate_limits": { 00:25:38.354 "rw_ios_per_sec": 0, 00:25:38.354 "rw_mbytes_per_sec": 0, 00:25:38.354 "r_mbytes_per_sec": 0, 00:25:38.354 "w_mbytes_per_sec": 0 00:25:38.354 }, 00:25:38.354 "claimed": true, 00:25:38.354 "claim_type": "exclusive_write", 00:25:38.354 "zoned": false, 00:25:38.354 "supported_io_types": { 00:25:38.354 "read": true, 00:25:38.354 "write": true, 00:25:38.354 "unmap": true, 00:25:38.354 "flush": true, 00:25:38.354 "reset": true, 00:25:38.354 "nvme_admin": false, 00:25:38.354 "nvme_io": false, 00:25:38.354 "nvme_io_md": false, 00:25:38.354 "write_zeroes": true, 00:25:38.354 "zcopy": true, 00:25:38.354 "get_zone_info": false, 00:25:38.354 "zone_management": false, 00:25:38.354 "zone_append": false, 00:25:38.354 "compare": false, 00:25:38.354 "compare_and_write": false, 00:25:38.354 "abort": true, 00:25:38.354 "seek_hole": false, 00:25:38.354 "seek_data": false, 00:25:38.354 "copy": true, 00:25:38.354 "nvme_iov_md": false 00:25:38.354 }, 00:25:38.354 "memory_domains": [ 00:25:38.354 { 00:25:38.354 "dma_device_id": "system", 00:25:38.354 "dma_device_type": 1 00:25:38.354 }, 00:25:38.354 { 00:25:38.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.354 "dma_device_type": 2 00:25:38.354 } 00:25:38.354 ], 00:25:38.354 "driver_specific": {} 00:25:38.354 } 00:25:38.354 ] 00:25:38.354 
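(Annotation, not part of the log.) Each BaseBdevN above is registered through the harness's waitforbdev helper, which is why the trace interleaves bdev_wait_for_examine calls and full bdev_get_bdevs JSON dumps after every bdev_malloc_create. Reduced to its essentials, the pattern looks like this; a sketch, not the helper's exact implementation:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b BaseBdev2     # claimed (exclusive_write) by Existed_Raid once examined
    $RPC bdev_wait_for_examine                      # block until registered bdevs finish examination
    $RPC bdev_get_bdevs -b BaseBdev2 -t 2000        # wait up to 2000 ms for the bdev to appear; emits the JSON dump shown above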
14:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.354 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.920 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.920 "name": "Existed_Raid", 00:25:38.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.920 "strip_size_kb": 64, 00:25:38.920 "state": "configuring", 00:25:38.920 "raid_level": "concat", 00:25:38.920 "superblock": false, 00:25:38.920 "num_base_bdevs": 4, 00:25:38.920 "num_base_bdevs_discovered": 2, 00:25:38.920 "num_base_bdevs_operational": 4, 00:25:38.920 "base_bdevs_list": [ 00:25:38.920 { 00:25:38.920 "name": "BaseBdev1", 00:25:38.920 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:38.920 "is_configured": true, 00:25:38.920 "data_offset": 0, 00:25:38.920 "data_size": 65536 00:25:38.920 }, 00:25:38.920 { 00:25:38.920 "name": "BaseBdev2", 00:25:38.920 "uuid": "be94652c-08c2-41a6-acb2-93eb60e51164", 00:25:38.920 "is_configured": true, 00:25:38.920 "data_offset": 0, 00:25:38.920 "data_size": 65536 00:25:38.920 }, 00:25:38.920 { 00:25:38.920 "name": "BaseBdev3", 00:25:38.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.920 "is_configured": false, 00:25:38.920 "data_offset": 0, 00:25:38.920 "data_size": 0 00:25:38.920 }, 00:25:38.920 { 00:25:38.920 "name": "BaseBdev4", 00:25:38.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.920 "is_configured": false, 00:25:38.920 "data_offset": 0, 00:25:38.920 "data_size": 0 00:25:38.920 } 00:25:38.920 ] 00:25:38.920 }' 00:25:38.920 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.920 14:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.485 14:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:39.743 [2024-07-25 14:08:28.650415] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:39.743 BaseBdev3 00:25:39.743 14:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:39.743 14:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:39.743 14:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:39.743 14:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:39.743 14:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:39.743 14:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:39.743 14:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.001 14:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:40.259 [ 00:25:40.259 { 00:25:40.259 "name": "BaseBdev3", 00:25:40.259 "aliases": [ 00:25:40.259 "0b2daf0f-dab3-4eb3-862d-f574a757fdb2" 00:25:40.259 ], 00:25:40.259 "product_name": "Malloc disk", 00:25:40.259 "block_size": 512, 00:25:40.259 "num_blocks": 65536, 00:25:40.259 "uuid": "0b2daf0f-dab3-4eb3-862d-f574a757fdb2", 00:25:40.259 "assigned_rate_limits": { 00:25:40.259 "rw_ios_per_sec": 0, 00:25:40.259 "rw_mbytes_per_sec": 0, 00:25:40.259 "r_mbytes_per_sec": 0, 00:25:40.259 "w_mbytes_per_sec": 0 00:25:40.259 }, 00:25:40.259 "claimed": true, 00:25:40.259 "claim_type": "exclusive_write", 00:25:40.260 "zoned": false, 00:25:40.260 "supported_io_types": { 00:25:40.260 "read": true, 00:25:40.260 "write": true, 00:25:40.260 "unmap": true, 00:25:40.260 "flush": true, 00:25:40.260 "reset": true, 00:25:40.260 "nvme_admin": false, 00:25:40.260 "nvme_io": false, 00:25:40.260 "nvme_io_md": false, 00:25:40.260 "write_zeroes": true, 00:25:40.260 "zcopy": true, 00:25:40.260 "get_zone_info": false, 00:25:40.260 "zone_management": false, 00:25:40.260 "zone_append": false, 00:25:40.260 "compare": false, 00:25:40.260 "compare_and_write": false, 00:25:40.260 "abort": true, 00:25:40.260 "seek_hole": false, 00:25:40.260 "seek_data": false, 00:25:40.260 "copy": true, 00:25:40.260 "nvme_iov_md": false 00:25:40.260 }, 00:25:40.260 "memory_domains": [ 00:25:40.260 { 00:25:40.260 "dma_device_id": "system", 00:25:40.260 "dma_device_type": 1 00:25:40.260 }, 00:25:40.260 { 00:25:40.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.260 "dma_device_type": 2 00:25:40.260 } 00:25:40.260 ], 00:25:40.260 "driver_specific": {} 00:25:40.260 } 00:25:40.260 ] 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:40.260 14:08:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.260 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.518 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.518 "name": "Existed_Raid", 00:25:40.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.518 "strip_size_kb": 64, 00:25:40.518 "state": "configuring", 00:25:40.518 "raid_level": "concat", 00:25:40.518 "superblock": false, 00:25:40.518 "num_base_bdevs": 4, 00:25:40.518 "num_base_bdevs_discovered": 3, 00:25:40.518 "num_base_bdevs_operational": 4, 00:25:40.518 "base_bdevs_list": [ 00:25:40.518 { 00:25:40.518 "name": "BaseBdev1", 00:25:40.518 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:40.518 "is_configured": true, 00:25:40.518 "data_offset": 0, 00:25:40.518 "data_size": 65536 00:25:40.518 }, 00:25:40.518 { 00:25:40.518 "name": "BaseBdev2", 00:25:40.518 "uuid": "be94652c-08c2-41a6-acb2-93eb60e51164", 00:25:40.518 "is_configured": true, 00:25:40.518 "data_offset": 0, 00:25:40.518 "data_size": 65536 00:25:40.518 }, 00:25:40.518 { 00:25:40.518 "name": "BaseBdev3", 00:25:40.518 "uuid": "0b2daf0f-dab3-4eb3-862d-f574a757fdb2", 00:25:40.518 "is_configured": true, 00:25:40.518 "data_offset": 0, 00:25:40.518 "data_size": 65536 00:25:40.518 }, 00:25:40.518 { 00:25:40.518 "name": "BaseBdev4", 00:25:40.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.518 "is_configured": false, 00:25:40.518 "data_offset": 0, 00:25:40.518 "data_size": 0 00:25:40.518 } 00:25:40.518 ] 00:25:40.518 }' 00:25:40.518 14:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.518 14:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:41.452 [2024-07-25 14:08:30.413595] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:41.452 [2024-07-25 14:08:30.413872] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:25:41.452 [2024-07-25 14:08:30.413924] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:41.452 [2024-07-25 14:08:30.414165] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:41.452 [2024-07-25 
14:08:30.414682] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:25:41.452 [2024-07-25 14:08:30.414806] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:25:41.452 [2024-07-25 14:08:30.415190] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.452 BaseBdev4 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:41.452 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:41.709 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:41.967 [ 00:25:41.967 { 00:25:41.967 "name": "BaseBdev4", 00:25:41.967 "aliases": [ 00:25:41.967 "51924ac4-b635-40e5-8b47-f04fac72855c" 00:25:41.967 ], 00:25:41.967 "product_name": "Malloc disk", 00:25:41.967 "block_size": 512, 00:25:41.967 "num_blocks": 65536, 00:25:41.967 "uuid": "51924ac4-b635-40e5-8b47-f04fac72855c", 00:25:41.967 "assigned_rate_limits": { 00:25:41.967 "rw_ios_per_sec": 0, 00:25:41.967 "rw_mbytes_per_sec": 0, 00:25:41.967 "r_mbytes_per_sec": 0, 00:25:41.967 "w_mbytes_per_sec": 0 00:25:41.967 }, 00:25:41.967 "claimed": true, 00:25:41.967 "claim_type": "exclusive_write", 00:25:41.967 "zoned": false, 00:25:41.967 "supported_io_types": { 00:25:41.967 "read": true, 00:25:41.967 "write": true, 00:25:41.967 "unmap": true, 00:25:41.967 "flush": true, 00:25:41.967 "reset": true, 00:25:41.967 "nvme_admin": false, 00:25:41.967 "nvme_io": false, 00:25:41.967 "nvme_io_md": false, 00:25:41.967 "write_zeroes": true, 00:25:41.967 "zcopy": true, 00:25:41.967 "get_zone_info": false, 00:25:41.967 "zone_management": false, 00:25:41.967 "zone_append": false, 00:25:41.967 "compare": false, 00:25:41.967 "compare_and_write": false, 00:25:41.967 "abort": true, 00:25:41.967 "seek_hole": false, 00:25:41.967 "seek_data": false, 00:25:41.967 "copy": true, 00:25:41.967 "nvme_iov_md": false 00:25:41.967 }, 00:25:41.967 "memory_domains": [ 00:25:41.967 { 00:25:41.967 "dma_device_id": "system", 00:25:41.967 "dma_device_type": 1 00:25:41.967 }, 00:25:41.967 { 00:25:41.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.967 "dma_device_type": 2 00:25:41.967 } 00:25:41.967 ], 00:25:41.967 "driver_specific": {} 00:25:41.967 } 00:25:41.967 ] 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.967 14:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.225 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.225 "name": "Existed_Raid", 00:25:42.225 "uuid": "9d085931-ca56-4406-8134-66a36999e9b5", 00:25:42.225 "strip_size_kb": 64, 00:25:42.225 "state": "online", 00:25:42.225 "raid_level": "concat", 00:25:42.225 "superblock": false, 00:25:42.225 "num_base_bdevs": 4, 00:25:42.225 "num_base_bdevs_discovered": 4, 00:25:42.225 "num_base_bdevs_operational": 4, 00:25:42.225 "base_bdevs_list": [ 00:25:42.225 { 00:25:42.225 "name": "BaseBdev1", 00:25:42.225 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:42.225 "is_configured": true, 00:25:42.225 "data_offset": 0, 00:25:42.225 "data_size": 65536 00:25:42.225 }, 00:25:42.225 { 00:25:42.225 "name": "BaseBdev2", 00:25:42.225 "uuid": "be94652c-08c2-41a6-acb2-93eb60e51164", 00:25:42.225 "is_configured": true, 00:25:42.225 "data_offset": 0, 00:25:42.225 "data_size": 65536 00:25:42.225 }, 00:25:42.225 { 00:25:42.225 "name": "BaseBdev3", 00:25:42.225 "uuid": "0b2daf0f-dab3-4eb3-862d-f574a757fdb2", 00:25:42.225 "is_configured": true, 00:25:42.225 "data_offset": 0, 00:25:42.225 "data_size": 65536 00:25:42.225 }, 00:25:42.225 { 00:25:42.225 "name": "BaseBdev4", 00:25:42.225 "uuid": "51924ac4-b635-40e5-8b47-f04fac72855c", 00:25:42.225 "is_configured": true, 00:25:42.225 "data_offset": 0, 00:25:42.225 "data_size": 65536 00:25:42.225 } 00:25:42.225 ] 00:25:42.225 }' 00:25:42.225 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.225 14:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
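Editor's annotation (not part of the captured log): the trace above is bdev_raid.sh's verify_raid_bdev_state helper confirming that Existed_Raid is online as a concat volume with a 64k strip and all four base bdevs discovered and operational. A minimal sketch of the same check, assuming an SPDK target is still listening on the RPC socket used by this run and that jq is available; the field selection mirrors what the helper appears to compare, it is not the helper itself:

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run
  # Pull the raid bdev record and print the fields checked above.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.raid_level) \(.strip_size_kb) \(.num_base_bdevs_operational)"'
  # Expected output for the dump above: online concat 64 4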
00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:43.158 14:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:43.158 [2024-07-25 14:08:32.154402] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:43.158 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:43.158 "name": "Existed_Raid", 00:25:43.158 "aliases": [ 00:25:43.158 "9d085931-ca56-4406-8134-66a36999e9b5" 00:25:43.158 ], 00:25:43.158 "product_name": "Raid Volume", 00:25:43.158 "block_size": 512, 00:25:43.158 "num_blocks": 262144, 00:25:43.158 "uuid": "9d085931-ca56-4406-8134-66a36999e9b5", 00:25:43.158 "assigned_rate_limits": { 00:25:43.158 "rw_ios_per_sec": 0, 00:25:43.158 "rw_mbytes_per_sec": 0, 00:25:43.158 "r_mbytes_per_sec": 0, 00:25:43.158 "w_mbytes_per_sec": 0 00:25:43.158 }, 00:25:43.158 "claimed": false, 00:25:43.158 "zoned": false, 00:25:43.158 "supported_io_types": { 00:25:43.158 "read": true, 00:25:43.158 "write": true, 00:25:43.158 "unmap": true, 00:25:43.158 "flush": true, 00:25:43.158 "reset": true, 00:25:43.158 "nvme_admin": false, 00:25:43.158 "nvme_io": false, 00:25:43.158 "nvme_io_md": false, 00:25:43.158 "write_zeroes": true, 00:25:43.158 "zcopy": false, 00:25:43.158 "get_zone_info": false, 00:25:43.158 "zone_management": false, 00:25:43.158 "zone_append": false, 00:25:43.158 "compare": false, 00:25:43.158 "compare_and_write": false, 00:25:43.158 "abort": false, 00:25:43.158 "seek_hole": false, 00:25:43.158 "seek_data": false, 00:25:43.158 "copy": false, 00:25:43.158 "nvme_iov_md": false 00:25:43.158 }, 00:25:43.158 "memory_domains": [ 00:25:43.158 { 00:25:43.158 "dma_device_id": "system", 00:25:43.158 "dma_device_type": 1 00:25:43.158 }, 00:25:43.158 { 00:25:43.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.158 "dma_device_type": 2 00:25:43.158 }, 00:25:43.158 { 00:25:43.158 "dma_device_id": "system", 00:25:43.158 "dma_device_type": 1 00:25:43.158 }, 00:25:43.158 { 00:25:43.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.158 "dma_device_type": 2 00:25:43.158 }, 00:25:43.158 { 00:25:43.158 "dma_device_id": "system", 00:25:43.158 "dma_device_type": 1 00:25:43.158 }, 00:25:43.158 { 00:25:43.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.158 "dma_device_type": 2 00:25:43.158 }, 00:25:43.158 { 00:25:43.158 "dma_device_id": "system", 00:25:43.158 "dma_device_type": 1 00:25:43.158 }, 00:25:43.158 { 00:25:43.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.158 "dma_device_type": 2 00:25:43.159 } 00:25:43.159 ], 00:25:43.159 "driver_specific": { 00:25:43.159 "raid": { 00:25:43.159 "uuid": "9d085931-ca56-4406-8134-66a36999e9b5", 00:25:43.159 "strip_size_kb": 64, 00:25:43.159 "state": "online", 00:25:43.159 "raid_level": "concat", 00:25:43.159 "superblock": false, 00:25:43.159 "num_base_bdevs": 4, 00:25:43.159 "num_base_bdevs_discovered": 4, 00:25:43.159 "num_base_bdevs_operational": 4, 00:25:43.159 "base_bdevs_list": [ 00:25:43.159 { 00:25:43.159 "name": "BaseBdev1", 00:25:43.159 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:43.159 "is_configured": true, 00:25:43.159 "data_offset": 0, 00:25:43.159 "data_size": 65536 00:25:43.159 }, 00:25:43.159 { 00:25:43.159 "name": "BaseBdev2", 00:25:43.159 "uuid": 
"be94652c-08c2-41a6-acb2-93eb60e51164", 00:25:43.159 "is_configured": true, 00:25:43.159 "data_offset": 0, 00:25:43.159 "data_size": 65536 00:25:43.159 }, 00:25:43.159 { 00:25:43.159 "name": "BaseBdev3", 00:25:43.159 "uuid": "0b2daf0f-dab3-4eb3-862d-f574a757fdb2", 00:25:43.159 "is_configured": true, 00:25:43.159 "data_offset": 0, 00:25:43.159 "data_size": 65536 00:25:43.159 }, 00:25:43.159 { 00:25:43.159 "name": "BaseBdev4", 00:25:43.159 "uuid": "51924ac4-b635-40e5-8b47-f04fac72855c", 00:25:43.159 "is_configured": true, 00:25:43.159 "data_offset": 0, 00:25:43.159 "data_size": 65536 00:25:43.159 } 00:25:43.159 ] 00:25:43.159 } 00:25:43.159 } 00:25:43.159 }' 00:25:43.159 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:43.416 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:43.417 BaseBdev2 00:25:43.417 BaseBdev3 00:25:43.417 BaseBdev4' 00:25:43.417 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.417 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:43.417 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:43.674 "name": "BaseBdev1", 00:25:43.674 "aliases": [ 00:25:43.674 "d13b57e9-8def-4f79-8268-7f954dbf7f68" 00:25:43.674 ], 00:25:43.674 "product_name": "Malloc disk", 00:25:43.674 "block_size": 512, 00:25:43.674 "num_blocks": 65536, 00:25:43.674 "uuid": "d13b57e9-8def-4f79-8268-7f954dbf7f68", 00:25:43.674 "assigned_rate_limits": { 00:25:43.674 "rw_ios_per_sec": 0, 00:25:43.674 "rw_mbytes_per_sec": 0, 00:25:43.674 "r_mbytes_per_sec": 0, 00:25:43.674 "w_mbytes_per_sec": 0 00:25:43.674 }, 00:25:43.674 "claimed": true, 00:25:43.674 "claim_type": "exclusive_write", 00:25:43.674 "zoned": false, 00:25:43.674 "supported_io_types": { 00:25:43.674 "read": true, 00:25:43.674 "write": true, 00:25:43.674 "unmap": true, 00:25:43.674 "flush": true, 00:25:43.674 "reset": true, 00:25:43.674 "nvme_admin": false, 00:25:43.674 "nvme_io": false, 00:25:43.674 "nvme_io_md": false, 00:25:43.674 "write_zeroes": true, 00:25:43.674 "zcopy": true, 00:25:43.674 "get_zone_info": false, 00:25:43.674 "zone_management": false, 00:25:43.674 "zone_append": false, 00:25:43.674 "compare": false, 00:25:43.674 "compare_and_write": false, 00:25:43.674 "abort": true, 00:25:43.674 "seek_hole": false, 00:25:43.674 "seek_data": false, 00:25:43.674 "copy": true, 00:25:43.674 "nvme_iov_md": false 00:25:43.674 }, 00:25:43.674 "memory_domains": [ 00:25:43.674 { 00:25:43.674 "dma_device_id": "system", 00:25:43.674 "dma_device_type": 1 00:25:43.674 }, 00:25:43.674 { 00:25:43.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.674 "dma_device_type": 2 00:25:43.674 } 00:25:43.674 ], 00:25:43.674 "driver_specific": {} 00:25:43.674 }' 00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:43.674 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.931 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.931 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.931 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.931 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.931 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.931 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.931 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:43.932 14:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:44.190 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.190 "name": "BaseBdev2", 00:25:44.190 "aliases": [ 00:25:44.190 "be94652c-08c2-41a6-acb2-93eb60e51164" 00:25:44.190 ], 00:25:44.190 "product_name": "Malloc disk", 00:25:44.190 "block_size": 512, 00:25:44.190 "num_blocks": 65536, 00:25:44.190 "uuid": "be94652c-08c2-41a6-acb2-93eb60e51164", 00:25:44.190 "assigned_rate_limits": { 00:25:44.190 "rw_ios_per_sec": 0, 00:25:44.190 "rw_mbytes_per_sec": 0, 00:25:44.190 "r_mbytes_per_sec": 0, 00:25:44.190 "w_mbytes_per_sec": 0 00:25:44.190 }, 00:25:44.190 "claimed": true, 00:25:44.190 "claim_type": "exclusive_write", 00:25:44.190 "zoned": false, 00:25:44.190 "supported_io_types": { 00:25:44.190 "read": true, 00:25:44.190 "write": true, 00:25:44.190 "unmap": true, 00:25:44.190 "flush": true, 00:25:44.190 "reset": true, 00:25:44.190 "nvme_admin": false, 00:25:44.190 "nvme_io": false, 00:25:44.190 "nvme_io_md": false, 00:25:44.190 "write_zeroes": true, 00:25:44.190 "zcopy": true, 00:25:44.190 "get_zone_info": false, 00:25:44.190 "zone_management": false, 00:25:44.190 "zone_append": false, 00:25:44.190 "compare": false, 00:25:44.190 "compare_and_write": false, 00:25:44.190 "abort": true, 00:25:44.190 "seek_hole": false, 00:25:44.190 "seek_data": false, 00:25:44.190 "copy": true, 00:25:44.190 "nvme_iov_md": false 00:25:44.190 }, 00:25:44.190 "memory_domains": [ 00:25:44.190 { 00:25:44.190 "dma_device_id": "system", 00:25:44.190 "dma_device_type": 1 00:25:44.190 }, 00:25:44.190 { 00:25:44.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.190 "dma_device_type": 2 00:25:44.190 } 00:25:44.190 ], 00:25:44.190 "driver_specific": {} 00:25:44.190 }' 00:25:44.190 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.190 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.447 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.735 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:44.735 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.735 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:44.735 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.991 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.991 "name": "BaseBdev3", 00:25:44.991 "aliases": [ 00:25:44.991 "0b2daf0f-dab3-4eb3-862d-f574a757fdb2" 00:25:44.991 ], 00:25:44.991 "product_name": "Malloc disk", 00:25:44.991 "block_size": 512, 00:25:44.991 "num_blocks": 65536, 00:25:44.991 "uuid": "0b2daf0f-dab3-4eb3-862d-f574a757fdb2", 00:25:44.991 "assigned_rate_limits": { 00:25:44.991 "rw_ios_per_sec": 0, 00:25:44.991 "rw_mbytes_per_sec": 0, 00:25:44.991 "r_mbytes_per_sec": 0, 00:25:44.991 "w_mbytes_per_sec": 0 00:25:44.991 }, 00:25:44.991 "claimed": true, 00:25:44.991 "claim_type": "exclusive_write", 00:25:44.991 "zoned": false, 00:25:44.991 "supported_io_types": { 00:25:44.991 "read": true, 00:25:44.991 "write": true, 00:25:44.991 "unmap": true, 00:25:44.991 "flush": true, 00:25:44.991 "reset": true, 00:25:44.991 "nvme_admin": false, 00:25:44.992 "nvme_io": false, 00:25:44.992 "nvme_io_md": false, 00:25:44.992 "write_zeroes": true, 00:25:44.992 "zcopy": true, 00:25:44.992 "get_zone_info": false, 00:25:44.992 "zone_management": false, 00:25:44.992 "zone_append": false, 00:25:44.992 "compare": false, 00:25:44.992 "compare_and_write": false, 00:25:44.992 "abort": true, 00:25:44.992 "seek_hole": false, 00:25:44.992 "seek_data": false, 00:25:44.992 "copy": true, 00:25:44.992 "nvme_iov_md": false 00:25:44.992 }, 00:25:44.992 "memory_domains": [ 00:25:44.992 { 00:25:44.992 "dma_device_id": "system", 00:25:44.992 "dma_device_type": 1 00:25:44.992 }, 00:25:44.992 { 00:25:44.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.992 "dma_device_type": 2 00:25:44.992 } 00:25:44.992 ], 00:25:44.992 "driver_specific": {} 00:25:44.992 }' 00:25:44.992 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.992 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.992 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.992 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.992 14:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.992 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.992 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:45.249 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:45.506 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:45.506 "name": "BaseBdev4", 00:25:45.506 "aliases": [ 00:25:45.506 "51924ac4-b635-40e5-8b47-f04fac72855c" 00:25:45.506 ], 00:25:45.506 "product_name": "Malloc disk", 00:25:45.506 "block_size": 512, 00:25:45.506 "num_blocks": 65536, 00:25:45.506 "uuid": "51924ac4-b635-40e5-8b47-f04fac72855c", 00:25:45.506 "assigned_rate_limits": { 00:25:45.506 "rw_ios_per_sec": 0, 00:25:45.506 "rw_mbytes_per_sec": 0, 00:25:45.506 "r_mbytes_per_sec": 0, 00:25:45.506 "w_mbytes_per_sec": 0 00:25:45.506 }, 00:25:45.506 "claimed": true, 00:25:45.506 "claim_type": "exclusive_write", 00:25:45.506 "zoned": false, 00:25:45.506 "supported_io_types": { 00:25:45.506 "read": true, 00:25:45.506 "write": true, 00:25:45.506 "unmap": true, 00:25:45.506 "flush": true, 00:25:45.506 "reset": true, 00:25:45.506 "nvme_admin": false, 00:25:45.506 "nvme_io": false, 00:25:45.506 "nvme_io_md": false, 00:25:45.506 "write_zeroes": true, 00:25:45.506 "zcopy": true, 00:25:45.506 "get_zone_info": false, 00:25:45.506 "zone_management": false, 00:25:45.506 "zone_append": false, 00:25:45.506 "compare": false, 00:25:45.506 "compare_and_write": false, 00:25:45.506 "abort": true, 00:25:45.506 "seek_hole": false, 00:25:45.506 "seek_data": false, 00:25:45.506 "copy": true, 00:25:45.506 "nvme_iov_md": false 00:25:45.506 }, 00:25:45.506 "memory_domains": [ 00:25:45.506 { 00:25:45.506 "dma_device_id": "system", 00:25:45.506 "dma_device_type": 1 00:25:45.506 }, 00:25:45.506 { 00:25:45.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.506 "dma_device_type": 2 00:25:45.506 } 00:25:45.506 ], 00:25:45.506 "driver_specific": {} 00:25:45.506 }' 00:25:45.506 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.764 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.764 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:45.764 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.764 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.764 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:45.764 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.764 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:46.022 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:46.022 14:08:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:46.022 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:46.022 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:46.022 14:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:46.280 [2024-07-25 14:08:35.182954] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:46.280 [2024-07-25 14:08:35.183136] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:46.280 [2024-07-25 14:08:35.183297] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.280 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.559 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.559 "name": "Existed_Raid", 00:25:46.559 "uuid": "9d085931-ca56-4406-8134-66a36999e9b5", 00:25:46.559 "strip_size_kb": 64, 00:25:46.559 "state": "offline", 00:25:46.559 "raid_level": "concat", 00:25:46.559 "superblock": false, 00:25:46.559 "num_base_bdevs": 4, 00:25:46.559 "num_base_bdevs_discovered": 3, 00:25:46.559 "num_base_bdevs_operational": 3, 00:25:46.559 "base_bdevs_list": [ 00:25:46.559 { 00:25:46.559 "name": null, 00:25:46.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.559 "is_configured": false, 00:25:46.559 "data_offset": 0, 00:25:46.559 "data_size": 65536 00:25:46.559 }, 00:25:46.559 { 00:25:46.559 "name": "BaseBdev2", 
00:25:46.559 "uuid": "be94652c-08c2-41a6-acb2-93eb60e51164", 00:25:46.559 "is_configured": true, 00:25:46.559 "data_offset": 0, 00:25:46.559 "data_size": 65536 00:25:46.559 }, 00:25:46.559 { 00:25:46.559 "name": "BaseBdev3", 00:25:46.559 "uuid": "0b2daf0f-dab3-4eb3-862d-f574a757fdb2", 00:25:46.559 "is_configured": true, 00:25:46.559 "data_offset": 0, 00:25:46.559 "data_size": 65536 00:25:46.559 }, 00:25:46.559 { 00:25:46.559 "name": "BaseBdev4", 00:25:46.559 "uuid": "51924ac4-b635-40e5-8b47-f04fac72855c", 00:25:46.559 "is_configured": true, 00:25:46.559 "data_offset": 0, 00:25:46.559 "data_size": 65536 00:25:46.559 } 00:25:46.559 ] 00:25:46.559 }' 00:25:46.559 14:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.559 14:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.124 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:47.124 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:47.124 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:47.381 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.638 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:47.638 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:47.638 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:47.638 [2024-07-25 14:08:36.673644] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:47.897 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:47.897 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:47.897 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.897 14:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:48.156 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:48.156 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:48.156 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:48.413 [2024-07-25 14:08:37.284952] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:48.413 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:48.413 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:48.413 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.413 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:48.670 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:48.670 14:08:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:48.670 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:48.927 [2024-07-25 14:08:37.877508] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:48.927 [2024-07-25 14:08:37.877754] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:25:49.185 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:49.185 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:49.185 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.185 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:49.442 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:49.442 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:49.442 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:49.442 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:49.442 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:49.442 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:49.699 BaseBdev2 00:25:49.699 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:49.699 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:49.699 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:49.699 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:49.699 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:49.699 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:49.699 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:49.957 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:50.214 [ 00:25:50.214 { 00:25:50.214 "name": "BaseBdev2", 00:25:50.214 "aliases": [ 00:25:50.214 "67a8b24b-4b6d-4a52-9450-b45c347328fa" 00:25:50.214 ], 00:25:50.214 "product_name": "Malloc disk", 00:25:50.214 "block_size": 512, 00:25:50.214 "num_blocks": 65536, 00:25:50.214 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:25:50.214 "assigned_rate_limits": { 00:25:50.214 "rw_ios_per_sec": 0, 00:25:50.214 "rw_mbytes_per_sec": 0, 00:25:50.214 "r_mbytes_per_sec": 0, 00:25:50.214 "w_mbytes_per_sec": 0 00:25:50.214 }, 00:25:50.214 "claimed": false, 00:25:50.214 "zoned": false, 00:25:50.214 "supported_io_types": { 00:25:50.214 "read": true, 00:25:50.214 "write": true, 00:25:50.214 "unmap": 
true, 00:25:50.214 "flush": true, 00:25:50.214 "reset": true, 00:25:50.214 "nvme_admin": false, 00:25:50.214 "nvme_io": false, 00:25:50.214 "nvme_io_md": false, 00:25:50.214 "write_zeroes": true, 00:25:50.214 "zcopy": true, 00:25:50.214 "get_zone_info": false, 00:25:50.214 "zone_management": false, 00:25:50.214 "zone_append": false, 00:25:50.214 "compare": false, 00:25:50.214 "compare_and_write": false, 00:25:50.214 "abort": true, 00:25:50.214 "seek_hole": false, 00:25:50.214 "seek_data": false, 00:25:50.214 "copy": true, 00:25:50.214 "nvme_iov_md": false 00:25:50.214 }, 00:25:50.214 "memory_domains": [ 00:25:50.214 { 00:25:50.214 "dma_device_id": "system", 00:25:50.214 "dma_device_type": 1 00:25:50.214 }, 00:25:50.214 { 00:25:50.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.214 "dma_device_type": 2 00:25:50.214 } 00:25:50.214 ], 00:25:50.214 "driver_specific": {} 00:25:50.214 } 00:25:50.214 ] 00:25:50.214 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:50.214 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:50.214 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:50.214 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:50.472 BaseBdev3 00:25:50.472 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:50.472 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:50.472 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:50.472 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:50.472 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:50.472 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:50.472 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.730 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:50.987 [ 00:25:50.987 { 00:25:50.987 "name": "BaseBdev3", 00:25:50.987 "aliases": [ 00:25:50.987 "3a868e86-5c96-4bc4-b871-4adcf28cb062" 00:25:50.987 ], 00:25:50.987 "product_name": "Malloc disk", 00:25:50.987 "block_size": 512, 00:25:50.987 "num_blocks": 65536, 00:25:50.987 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:25:50.987 "assigned_rate_limits": { 00:25:50.987 "rw_ios_per_sec": 0, 00:25:50.987 "rw_mbytes_per_sec": 0, 00:25:50.987 "r_mbytes_per_sec": 0, 00:25:50.987 "w_mbytes_per_sec": 0 00:25:50.987 }, 00:25:50.987 "claimed": false, 00:25:50.987 "zoned": false, 00:25:50.987 "supported_io_types": { 00:25:50.987 "read": true, 00:25:50.987 "write": true, 00:25:50.987 "unmap": true, 00:25:50.987 "flush": true, 00:25:50.987 "reset": true, 00:25:50.987 "nvme_admin": false, 00:25:50.987 "nvme_io": false, 00:25:50.987 "nvme_io_md": false, 00:25:50.987 "write_zeroes": true, 00:25:50.987 "zcopy": true, 00:25:50.987 "get_zone_info": false, 00:25:50.987 "zone_management": false, 00:25:50.987 "zone_append": false, 00:25:50.987 
"compare": false, 00:25:50.987 "compare_and_write": false, 00:25:50.987 "abort": true, 00:25:50.987 "seek_hole": false, 00:25:50.987 "seek_data": false, 00:25:50.987 "copy": true, 00:25:50.987 "nvme_iov_md": false 00:25:50.987 }, 00:25:50.987 "memory_domains": [ 00:25:50.987 { 00:25:50.987 "dma_device_id": "system", 00:25:50.987 "dma_device_type": 1 00:25:50.987 }, 00:25:50.987 { 00:25:50.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.987 "dma_device_type": 2 00:25:50.987 } 00:25:50.987 ], 00:25:50.987 "driver_specific": {} 00:25:50.987 } 00:25:50.987 ] 00:25:51.245 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:51.245 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:51.245 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:51.245 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:51.502 BaseBdev4 00:25:51.502 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:51.502 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:51.502 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:51.502 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:51.502 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:51.502 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:51.503 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:51.761 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:52.019 [ 00:25:52.019 { 00:25:52.019 "name": "BaseBdev4", 00:25:52.019 "aliases": [ 00:25:52.019 "c83af3a3-03fb-438c-be25-df412f2bd0f5" 00:25:52.019 ], 00:25:52.019 "product_name": "Malloc disk", 00:25:52.019 "block_size": 512, 00:25:52.019 "num_blocks": 65536, 00:25:52.019 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:25:52.019 "assigned_rate_limits": { 00:25:52.019 "rw_ios_per_sec": 0, 00:25:52.019 "rw_mbytes_per_sec": 0, 00:25:52.019 "r_mbytes_per_sec": 0, 00:25:52.019 "w_mbytes_per_sec": 0 00:25:52.019 }, 00:25:52.019 "claimed": false, 00:25:52.019 "zoned": false, 00:25:52.019 "supported_io_types": { 00:25:52.019 "read": true, 00:25:52.019 "write": true, 00:25:52.019 "unmap": true, 00:25:52.019 "flush": true, 00:25:52.019 "reset": true, 00:25:52.019 "nvme_admin": false, 00:25:52.019 "nvme_io": false, 00:25:52.019 "nvme_io_md": false, 00:25:52.019 "write_zeroes": true, 00:25:52.019 "zcopy": true, 00:25:52.019 "get_zone_info": false, 00:25:52.019 "zone_management": false, 00:25:52.019 "zone_append": false, 00:25:52.019 "compare": false, 00:25:52.019 "compare_and_write": false, 00:25:52.019 "abort": true, 00:25:52.019 "seek_hole": false, 00:25:52.019 "seek_data": false, 00:25:52.019 "copy": true, 00:25:52.019 "nvme_iov_md": false 00:25:52.019 }, 00:25:52.019 "memory_domains": [ 00:25:52.019 { 00:25:52.019 "dma_device_id": "system", 00:25:52.019 
"dma_device_type": 1 00:25:52.019 }, 00:25:52.019 { 00:25:52.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.019 "dma_device_type": 2 00:25:52.019 } 00:25:52.019 ], 00:25:52.019 "driver_specific": {} 00:25:52.019 } 00:25:52.019 ] 00:25:52.019 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:52.019 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:52.019 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:52.019 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:52.278 [2024-07-25 14:08:41.184230] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:52.278 [2024-07-25 14:08:41.184534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:52.278 [2024-07-25 14:08:41.184669] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:52.278 [2024-07-25 14:08:41.187021] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:52.278 [2024-07-25 14:08:41.187241] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.278 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.537 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.537 "name": "Existed_Raid", 00:25:52.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.537 "strip_size_kb": 64, 00:25:52.537 "state": "configuring", 00:25:52.537 "raid_level": "concat", 00:25:52.537 "superblock": false, 00:25:52.537 "num_base_bdevs": 4, 00:25:52.537 "num_base_bdevs_discovered": 3, 00:25:52.537 "num_base_bdevs_operational": 4, 00:25:52.537 "base_bdevs_list": [ 00:25:52.537 { 00:25:52.537 "name": "BaseBdev1", 00:25:52.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.537 
"is_configured": false, 00:25:52.537 "data_offset": 0, 00:25:52.537 "data_size": 0 00:25:52.537 }, 00:25:52.537 { 00:25:52.537 "name": "BaseBdev2", 00:25:52.537 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:25:52.537 "is_configured": true, 00:25:52.537 "data_offset": 0, 00:25:52.537 "data_size": 65536 00:25:52.537 }, 00:25:52.537 { 00:25:52.537 "name": "BaseBdev3", 00:25:52.537 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:25:52.537 "is_configured": true, 00:25:52.537 "data_offset": 0, 00:25:52.537 "data_size": 65536 00:25:52.537 }, 00:25:52.537 { 00:25:52.537 "name": "BaseBdev4", 00:25:52.537 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:25:52.537 "is_configured": true, 00:25:52.537 "data_offset": 0, 00:25:52.537 "data_size": 65536 00:25:52.537 } 00:25:52.537 ] 00:25:52.537 }' 00:25:52.537 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.537 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.102 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:53.359 [2024-07-25 14:08:42.388498] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.617 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.876 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.876 "name": "Existed_Raid", 00:25:53.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.876 "strip_size_kb": 64, 00:25:53.876 "state": "configuring", 00:25:53.876 "raid_level": "concat", 00:25:53.876 "superblock": false, 00:25:53.876 "num_base_bdevs": 4, 00:25:53.876 "num_base_bdevs_discovered": 2, 00:25:53.876 "num_base_bdevs_operational": 4, 00:25:53.876 "base_bdevs_list": [ 00:25:53.876 { 00:25:53.876 "name": "BaseBdev1", 00:25:53.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.876 "is_configured": false, 00:25:53.876 "data_offset": 0, 00:25:53.876 "data_size": 0 00:25:53.876 }, 00:25:53.876 { 00:25:53.876 "name": null, 
00:25:53.876 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:25:53.876 "is_configured": false, 00:25:53.876 "data_offset": 0, 00:25:53.876 "data_size": 65536 00:25:53.876 }, 00:25:53.876 { 00:25:53.876 "name": "BaseBdev3", 00:25:53.876 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:25:53.876 "is_configured": true, 00:25:53.876 "data_offset": 0, 00:25:53.876 "data_size": 65536 00:25:53.876 }, 00:25:53.876 { 00:25:53.876 "name": "BaseBdev4", 00:25:53.876 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:25:53.876 "is_configured": true, 00:25:53.876 "data_offset": 0, 00:25:53.876 "data_size": 65536 00:25:53.876 } 00:25:53.876 ] 00:25:53.876 }' 00:25:53.876 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.876 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.441 14:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:54.441 14:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.698 14:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:54.698 14:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:54.956 [2024-07-25 14:08:43.915558] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.956 BaseBdev1 00:25:54.956 14:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:54.956 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:54.956 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:54.956 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:54.956 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:54.956 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:54.956 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:55.213 14:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:55.779 [ 00:25:55.779 { 00:25:55.779 "name": "BaseBdev1", 00:25:55.780 "aliases": [ 00:25:55.780 "ba191b91-7201-494f-bc4a-72ee7db09927" 00:25:55.780 ], 00:25:55.780 "product_name": "Malloc disk", 00:25:55.780 "block_size": 512, 00:25:55.780 "num_blocks": 65536, 00:25:55.780 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:25:55.780 "assigned_rate_limits": { 00:25:55.780 "rw_ios_per_sec": 0, 00:25:55.780 "rw_mbytes_per_sec": 0, 00:25:55.780 "r_mbytes_per_sec": 0, 00:25:55.780 "w_mbytes_per_sec": 0 00:25:55.780 }, 00:25:55.780 "claimed": true, 00:25:55.780 "claim_type": "exclusive_write", 00:25:55.780 "zoned": false, 00:25:55.780 "supported_io_types": { 00:25:55.780 "read": true, 00:25:55.780 "write": true, 00:25:55.780 "unmap": true, 00:25:55.780 "flush": true, 00:25:55.780 "reset": true, 00:25:55.780 "nvme_admin": false, 00:25:55.780 "nvme_io": 
false, 00:25:55.780 "nvme_io_md": false, 00:25:55.780 "write_zeroes": true, 00:25:55.780 "zcopy": true, 00:25:55.780 "get_zone_info": false, 00:25:55.780 "zone_management": false, 00:25:55.780 "zone_append": false, 00:25:55.780 "compare": false, 00:25:55.780 "compare_and_write": false, 00:25:55.780 "abort": true, 00:25:55.780 "seek_hole": false, 00:25:55.780 "seek_data": false, 00:25:55.780 "copy": true, 00:25:55.780 "nvme_iov_md": false 00:25:55.780 }, 00:25:55.780 "memory_domains": [ 00:25:55.780 { 00:25:55.780 "dma_device_id": "system", 00:25:55.780 "dma_device_type": 1 00:25:55.780 }, 00:25:55.780 { 00:25:55.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.780 "dma_device_type": 2 00:25:55.780 } 00:25:55.780 ], 00:25:55.780 "driver_specific": {} 00:25:55.780 } 00:25:55.780 ] 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.780 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.077 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.077 "name": "Existed_Raid", 00:25:56.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.077 "strip_size_kb": 64, 00:25:56.077 "state": "configuring", 00:25:56.077 "raid_level": "concat", 00:25:56.077 "superblock": false, 00:25:56.077 "num_base_bdevs": 4, 00:25:56.077 "num_base_bdevs_discovered": 3, 00:25:56.077 "num_base_bdevs_operational": 4, 00:25:56.077 "base_bdevs_list": [ 00:25:56.077 { 00:25:56.077 "name": "BaseBdev1", 00:25:56.077 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:25:56.077 "is_configured": true, 00:25:56.077 "data_offset": 0, 00:25:56.077 "data_size": 65536 00:25:56.078 }, 00:25:56.078 { 00:25:56.078 "name": null, 00:25:56.078 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:25:56.078 "is_configured": false, 00:25:56.078 "data_offset": 0, 00:25:56.078 "data_size": 65536 00:25:56.078 }, 00:25:56.078 { 00:25:56.078 "name": "BaseBdev3", 00:25:56.078 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:25:56.078 "is_configured": true, 00:25:56.078 "data_offset": 0, 00:25:56.078 "data_size": 65536 00:25:56.078 }, 
00:25:56.078 { 00:25:56.078 "name": "BaseBdev4", 00:25:56.078 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:25:56.078 "is_configured": true, 00:25:56.078 "data_offset": 0, 00:25:56.078 "data_size": 65536 00:25:56.078 } 00:25:56.078 ] 00:25:56.078 }' 00:25:56.078 14:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.078 14:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.644 14:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.644 14:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:56.901 14:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:56.901 14:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:57.159 [2024-07-25 14:08:46.044158] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.159 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.417 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.417 "name": "Existed_Raid", 00:25:57.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.417 "strip_size_kb": 64, 00:25:57.417 "state": "configuring", 00:25:57.417 "raid_level": "concat", 00:25:57.417 "superblock": false, 00:25:57.417 "num_base_bdevs": 4, 00:25:57.417 "num_base_bdevs_discovered": 2, 00:25:57.417 "num_base_bdevs_operational": 4, 00:25:57.417 "base_bdevs_list": [ 00:25:57.417 { 00:25:57.417 "name": "BaseBdev1", 00:25:57.417 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:25:57.417 "is_configured": true, 00:25:57.417 "data_offset": 0, 00:25:57.417 "data_size": 65536 00:25:57.417 }, 00:25:57.417 { 00:25:57.417 "name": null, 00:25:57.417 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:25:57.417 "is_configured": false, 00:25:57.417 "data_offset": 
0, 00:25:57.417 "data_size": 65536 00:25:57.417 }, 00:25:57.417 { 00:25:57.417 "name": null, 00:25:57.417 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:25:57.417 "is_configured": false, 00:25:57.417 "data_offset": 0, 00:25:57.417 "data_size": 65536 00:25:57.417 }, 00:25:57.417 { 00:25:57.417 "name": "BaseBdev4", 00:25:57.417 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:25:57.417 "is_configured": true, 00:25:57.417 "data_offset": 0, 00:25:57.417 "data_size": 65536 00:25:57.417 } 00:25:57.417 ] 00:25:57.417 }' 00:25:57.417 14:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.417 14:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.979 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.979 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:58.236 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:58.236 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:58.494 [2024-07-25 14:08:47.524573] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.751 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.009 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.009 "name": "Existed_Raid", 00:25:59.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.009 "strip_size_kb": 64, 00:25:59.009 "state": "configuring", 00:25:59.009 "raid_level": "concat", 00:25:59.009 "superblock": false, 00:25:59.009 "num_base_bdevs": 4, 00:25:59.009 "num_base_bdevs_discovered": 3, 00:25:59.009 "num_base_bdevs_operational": 4, 00:25:59.009 "base_bdevs_list": [ 00:25:59.009 { 00:25:59.009 "name": "BaseBdev1", 00:25:59.009 "uuid": 
"ba191b91-7201-494f-bc4a-72ee7db09927", 00:25:59.009 "is_configured": true, 00:25:59.009 "data_offset": 0, 00:25:59.009 "data_size": 65536 00:25:59.009 }, 00:25:59.009 { 00:25:59.009 "name": null, 00:25:59.009 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:25:59.009 "is_configured": false, 00:25:59.009 "data_offset": 0, 00:25:59.009 "data_size": 65536 00:25:59.009 }, 00:25:59.009 { 00:25:59.009 "name": "BaseBdev3", 00:25:59.009 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:25:59.009 "is_configured": true, 00:25:59.009 "data_offset": 0, 00:25:59.009 "data_size": 65536 00:25:59.009 }, 00:25:59.009 { 00:25:59.009 "name": "BaseBdev4", 00:25:59.009 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:25:59.009 "is_configured": true, 00:25:59.009 "data_offset": 0, 00:25:59.009 "data_size": 65536 00:25:59.009 } 00:25:59.009 ] 00:25:59.009 }' 00:25:59.009 14:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.009 14:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.576 14:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.576 14:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:59.834 14:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:59.835 14:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:00.093 [2024-07-25 14:08:49.012961] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.093 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.660 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.660 "name": "Existed_Raid", 00:26:00.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.660 "strip_size_kb": 64, 00:26:00.660 "state": "configuring", 00:26:00.660 "raid_level": 
"concat", 00:26:00.660 "superblock": false, 00:26:00.660 "num_base_bdevs": 4, 00:26:00.660 "num_base_bdevs_discovered": 2, 00:26:00.660 "num_base_bdevs_operational": 4, 00:26:00.660 "base_bdevs_list": [ 00:26:00.660 { 00:26:00.660 "name": null, 00:26:00.660 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:26:00.660 "is_configured": false, 00:26:00.660 "data_offset": 0, 00:26:00.660 "data_size": 65536 00:26:00.660 }, 00:26:00.660 { 00:26:00.660 "name": null, 00:26:00.660 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:26:00.660 "is_configured": false, 00:26:00.660 "data_offset": 0, 00:26:00.660 "data_size": 65536 00:26:00.660 }, 00:26:00.660 { 00:26:00.660 "name": "BaseBdev3", 00:26:00.660 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:26:00.660 "is_configured": true, 00:26:00.660 "data_offset": 0, 00:26:00.660 "data_size": 65536 00:26:00.660 }, 00:26:00.660 { 00:26:00.660 "name": "BaseBdev4", 00:26:00.660 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:26:00.660 "is_configured": true, 00:26:00.660 "data_offset": 0, 00:26:00.660 "data_size": 65536 00:26:00.660 } 00:26:00.660 ] 00:26:00.660 }' 00:26:00.660 14:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.660 14:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.226 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.226 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:01.484 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:01.484 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:01.743 [2024-07-25 14:08:50.560206] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.743 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:26:02.001 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.001 "name": "Existed_Raid", 00:26:02.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.001 "strip_size_kb": 64, 00:26:02.001 "state": "configuring", 00:26:02.001 "raid_level": "concat", 00:26:02.001 "superblock": false, 00:26:02.001 "num_base_bdevs": 4, 00:26:02.001 "num_base_bdevs_discovered": 3, 00:26:02.001 "num_base_bdevs_operational": 4, 00:26:02.001 "base_bdevs_list": [ 00:26:02.001 { 00:26:02.001 "name": null, 00:26:02.001 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:26:02.001 "is_configured": false, 00:26:02.001 "data_offset": 0, 00:26:02.001 "data_size": 65536 00:26:02.001 }, 00:26:02.001 { 00:26:02.001 "name": "BaseBdev2", 00:26:02.001 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:26:02.001 "is_configured": true, 00:26:02.001 "data_offset": 0, 00:26:02.001 "data_size": 65536 00:26:02.001 }, 00:26:02.001 { 00:26:02.001 "name": "BaseBdev3", 00:26:02.001 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:26:02.001 "is_configured": true, 00:26:02.001 "data_offset": 0, 00:26:02.001 "data_size": 65536 00:26:02.001 }, 00:26:02.001 { 00:26:02.001 "name": "BaseBdev4", 00:26:02.001 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:26:02.001 "is_configured": true, 00:26:02.001 "data_offset": 0, 00:26:02.001 "data_size": 65536 00:26:02.001 } 00:26:02.001 ] 00:26:02.001 }' 00:26:02.001 14:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:02.001 14:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.567 14:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.567 14:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:02.825 14:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:02.825 14:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:02.825 14:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.083 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ba191b91-7201-494f-bc4a-72ee7db09927 00:26:03.341 [2024-07-25 14:08:52.276611] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:03.341 [2024-07-25 14:08:52.276980] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:26:03.341 [2024-07-25 14:08:52.277024] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:03.341 [2024-07-25 14:08:52.277256] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:03.341 [2024-07-25 14:08:52.277761] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:26:03.341 [2024-07-25 14:08:52.277964] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:26:03.341 [2024-07-25 14:08:52.278335] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.341 NewBaseBdev 00:26:03.341 14:08:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:03.341 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:03.341 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:03.341 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:03.341 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:03.341 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:03.341 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:03.599 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:03.857 [ 00:26:03.857 { 00:26:03.857 "name": "NewBaseBdev", 00:26:03.857 "aliases": [ 00:26:03.857 "ba191b91-7201-494f-bc4a-72ee7db09927" 00:26:03.857 ], 00:26:03.857 "product_name": "Malloc disk", 00:26:03.857 "block_size": 512, 00:26:03.857 "num_blocks": 65536, 00:26:03.857 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:26:03.857 "assigned_rate_limits": { 00:26:03.857 "rw_ios_per_sec": 0, 00:26:03.857 "rw_mbytes_per_sec": 0, 00:26:03.857 "r_mbytes_per_sec": 0, 00:26:03.857 "w_mbytes_per_sec": 0 00:26:03.857 }, 00:26:03.857 "claimed": true, 00:26:03.857 "claim_type": "exclusive_write", 00:26:03.857 "zoned": false, 00:26:03.857 "supported_io_types": { 00:26:03.857 "read": true, 00:26:03.857 "write": true, 00:26:03.857 "unmap": true, 00:26:03.857 "flush": true, 00:26:03.857 "reset": true, 00:26:03.857 "nvme_admin": false, 00:26:03.857 "nvme_io": false, 00:26:03.857 "nvme_io_md": false, 00:26:03.857 "write_zeroes": true, 00:26:03.857 "zcopy": true, 00:26:03.857 "get_zone_info": false, 00:26:03.857 "zone_management": false, 00:26:03.857 "zone_append": false, 00:26:03.857 "compare": false, 00:26:03.857 "compare_and_write": false, 00:26:03.857 "abort": true, 00:26:03.857 "seek_hole": false, 00:26:03.857 "seek_data": false, 00:26:03.857 "copy": true, 00:26:03.857 "nvme_iov_md": false 00:26:03.857 }, 00:26:03.857 "memory_domains": [ 00:26:03.857 { 00:26:03.857 "dma_device_id": "system", 00:26:03.857 "dma_device_type": 1 00:26:03.857 }, 00:26:03.857 { 00:26:03.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.857 "dma_device_type": 2 00:26:03.857 } 00:26:03.857 ], 00:26:03.857 "driver_specific": {} 00:26:03.857 } 00:26:03.857 ] 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:03.857 14:08:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.857 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.116 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.116 "name": "Existed_Raid", 00:26:04.116 "uuid": "ff6592f5-85f6-4ac0-880d-8b18f18e1148", 00:26:04.116 "strip_size_kb": 64, 00:26:04.116 "state": "online", 00:26:04.116 "raid_level": "concat", 00:26:04.116 "superblock": false, 00:26:04.116 "num_base_bdevs": 4, 00:26:04.116 "num_base_bdevs_discovered": 4, 00:26:04.116 "num_base_bdevs_operational": 4, 00:26:04.116 "base_bdevs_list": [ 00:26:04.116 { 00:26:04.116 "name": "NewBaseBdev", 00:26:04.116 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:26:04.116 "is_configured": true, 00:26:04.116 "data_offset": 0, 00:26:04.116 "data_size": 65536 00:26:04.116 }, 00:26:04.116 { 00:26:04.116 "name": "BaseBdev2", 00:26:04.116 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:26:04.116 "is_configured": true, 00:26:04.116 "data_offset": 0, 00:26:04.116 "data_size": 65536 00:26:04.116 }, 00:26:04.116 { 00:26:04.116 "name": "BaseBdev3", 00:26:04.116 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:26:04.116 "is_configured": true, 00:26:04.116 "data_offset": 0, 00:26:04.116 "data_size": 65536 00:26:04.116 }, 00:26:04.116 { 00:26:04.116 "name": "BaseBdev4", 00:26:04.116 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:26:04.116 "is_configured": true, 00:26:04.116 "data_offset": 0, 00:26:04.116 "data_size": 65536 00:26:04.116 } 00:26:04.116 ] 00:26:04.116 }' 00:26:04.116 14:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.116 14:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:04.714 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:04.972 [2024-07-25 14:08:53.977515] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.972 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:04.972 
"name": "Existed_Raid", 00:26:04.972 "aliases": [ 00:26:04.972 "ff6592f5-85f6-4ac0-880d-8b18f18e1148" 00:26:04.972 ], 00:26:04.972 "product_name": "Raid Volume", 00:26:04.972 "block_size": 512, 00:26:04.972 "num_blocks": 262144, 00:26:04.972 "uuid": "ff6592f5-85f6-4ac0-880d-8b18f18e1148", 00:26:04.972 "assigned_rate_limits": { 00:26:04.972 "rw_ios_per_sec": 0, 00:26:04.972 "rw_mbytes_per_sec": 0, 00:26:04.972 "r_mbytes_per_sec": 0, 00:26:04.972 "w_mbytes_per_sec": 0 00:26:04.972 }, 00:26:04.972 "claimed": false, 00:26:04.972 "zoned": false, 00:26:04.972 "supported_io_types": { 00:26:04.972 "read": true, 00:26:04.972 "write": true, 00:26:04.972 "unmap": true, 00:26:04.972 "flush": true, 00:26:04.972 "reset": true, 00:26:04.972 "nvme_admin": false, 00:26:04.972 "nvme_io": false, 00:26:04.972 "nvme_io_md": false, 00:26:04.972 "write_zeroes": true, 00:26:04.972 "zcopy": false, 00:26:04.972 "get_zone_info": false, 00:26:04.972 "zone_management": false, 00:26:04.972 "zone_append": false, 00:26:04.972 "compare": false, 00:26:04.972 "compare_and_write": false, 00:26:04.972 "abort": false, 00:26:04.972 "seek_hole": false, 00:26:04.972 "seek_data": false, 00:26:04.972 "copy": false, 00:26:04.972 "nvme_iov_md": false 00:26:04.972 }, 00:26:04.972 "memory_domains": [ 00:26:04.972 { 00:26:04.972 "dma_device_id": "system", 00:26:04.972 "dma_device_type": 1 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.972 "dma_device_type": 2 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "dma_device_id": "system", 00:26:04.972 "dma_device_type": 1 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.972 "dma_device_type": 2 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "dma_device_id": "system", 00:26:04.972 "dma_device_type": 1 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.972 "dma_device_type": 2 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "dma_device_id": "system", 00:26:04.972 "dma_device_type": 1 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.972 "dma_device_type": 2 00:26:04.972 } 00:26:04.972 ], 00:26:04.972 "driver_specific": { 00:26:04.972 "raid": { 00:26:04.972 "uuid": "ff6592f5-85f6-4ac0-880d-8b18f18e1148", 00:26:04.972 "strip_size_kb": 64, 00:26:04.972 "state": "online", 00:26:04.972 "raid_level": "concat", 00:26:04.972 "superblock": false, 00:26:04.972 "num_base_bdevs": 4, 00:26:04.972 "num_base_bdevs_discovered": 4, 00:26:04.972 "num_base_bdevs_operational": 4, 00:26:04.972 "base_bdevs_list": [ 00:26:04.972 { 00:26:04.972 "name": "NewBaseBdev", 00:26:04.972 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:26:04.972 "is_configured": true, 00:26:04.972 "data_offset": 0, 00:26:04.972 "data_size": 65536 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "name": "BaseBdev2", 00:26:04.972 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:26:04.972 "is_configured": true, 00:26:04.972 "data_offset": 0, 00:26:04.972 "data_size": 65536 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "name": "BaseBdev3", 00:26:04.972 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:26:04.972 "is_configured": true, 00:26:04.972 "data_offset": 0, 00:26:04.972 "data_size": 65536 00:26:04.972 }, 00:26:04.972 { 00:26:04.972 "name": "BaseBdev4", 00:26:04.972 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:26:04.972 "is_configured": true, 00:26:04.972 "data_offset": 0, 00:26:04.972 "data_size": 65536 00:26:04.972 } 00:26:04.972 ] 00:26:04.972 } 00:26:04.972 } 
00:26:04.972 }' 00:26:04.972 14:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:05.230 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:05.230 BaseBdev2 00:26:05.230 BaseBdev3 00:26:05.230 BaseBdev4' 00:26:05.230 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:05.230 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:05.230 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:05.488 "name": "NewBaseBdev", 00:26:05.488 "aliases": [ 00:26:05.488 "ba191b91-7201-494f-bc4a-72ee7db09927" 00:26:05.488 ], 00:26:05.488 "product_name": "Malloc disk", 00:26:05.488 "block_size": 512, 00:26:05.488 "num_blocks": 65536, 00:26:05.488 "uuid": "ba191b91-7201-494f-bc4a-72ee7db09927", 00:26:05.488 "assigned_rate_limits": { 00:26:05.488 "rw_ios_per_sec": 0, 00:26:05.488 "rw_mbytes_per_sec": 0, 00:26:05.488 "r_mbytes_per_sec": 0, 00:26:05.488 "w_mbytes_per_sec": 0 00:26:05.488 }, 00:26:05.488 "claimed": true, 00:26:05.488 "claim_type": "exclusive_write", 00:26:05.488 "zoned": false, 00:26:05.488 "supported_io_types": { 00:26:05.488 "read": true, 00:26:05.488 "write": true, 00:26:05.488 "unmap": true, 00:26:05.488 "flush": true, 00:26:05.488 "reset": true, 00:26:05.488 "nvme_admin": false, 00:26:05.488 "nvme_io": false, 00:26:05.488 "nvme_io_md": false, 00:26:05.488 "write_zeroes": true, 00:26:05.488 "zcopy": true, 00:26:05.488 "get_zone_info": false, 00:26:05.488 "zone_management": false, 00:26:05.488 "zone_append": false, 00:26:05.488 "compare": false, 00:26:05.488 "compare_and_write": false, 00:26:05.488 "abort": true, 00:26:05.488 "seek_hole": false, 00:26:05.488 "seek_data": false, 00:26:05.488 "copy": true, 00:26:05.488 "nvme_iov_md": false 00:26:05.488 }, 00:26:05.488 "memory_domains": [ 00:26:05.488 { 00:26:05.488 "dma_device_id": "system", 00:26:05.488 "dma_device_type": 1 00:26:05.488 }, 00:26:05.488 { 00:26:05.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.488 "dma_device_type": 2 00:26:05.488 } 00:26:05.488 ], 00:26:05.488 "driver_specific": {} 00:26:05.488 }' 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:05.488 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.746 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.746 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:05.746 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.746 14:08:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.746 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:05.746 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:05.746 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:05.746 14:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:06.004 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:06.004 "name": "BaseBdev2", 00:26:06.004 "aliases": [ 00:26:06.004 "67a8b24b-4b6d-4a52-9450-b45c347328fa" 00:26:06.004 ], 00:26:06.004 "product_name": "Malloc disk", 00:26:06.004 "block_size": 512, 00:26:06.004 "num_blocks": 65536, 00:26:06.004 "uuid": "67a8b24b-4b6d-4a52-9450-b45c347328fa", 00:26:06.004 "assigned_rate_limits": { 00:26:06.004 "rw_ios_per_sec": 0, 00:26:06.004 "rw_mbytes_per_sec": 0, 00:26:06.004 "r_mbytes_per_sec": 0, 00:26:06.004 "w_mbytes_per_sec": 0 00:26:06.004 }, 00:26:06.004 "claimed": true, 00:26:06.004 "claim_type": "exclusive_write", 00:26:06.004 "zoned": false, 00:26:06.004 "supported_io_types": { 00:26:06.004 "read": true, 00:26:06.004 "write": true, 00:26:06.004 "unmap": true, 00:26:06.004 "flush": true, 00:26:06.004 "reset": true, 00:26:06.004 "nvme_admin": false, 00:26:06.004 "nvme_io": false, 00:26:06.004 "nvme_io_md": false, 00:26:06.004 "write_zeroes": true, 00:26:06.004 "zcopy": true, 00:26:06.004 "get_zone_info": false, 00:26:06.004 "zone_management": false, 00:26:06.004 "zone_append": false, 00:26:06.004 "compare": false, 00:26:06.004 "compare_and_write": false, 00:26:06.004 "abort": true, 00:26:06.004 "seek_hole": false, 00:26:06.004 "seek_data": false, 00:26:06.004 "copy": true, 00:26:06.004 "nvme_iov_md": false 00:26:06.004 }, 00:26:06.004 "memory_domains": [ 00:26:06.004 { 00:26:06.004 "dma_device_id": "system", 00:26:06.004 "dma_device_type": 1 00:26:06.004 }, 00:26:06.004 { 00:26:06.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.004 "dma_device_type": 2 00:26:06.004 } 00:26:06.004 ], 00:26:06.004 "driver_specific": {} 00:26:06.004 }' 00:26:06.004 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:06.262 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:06.520 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:06.520 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:06.520 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:06.520 
14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:06.520 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:06.520 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:06.778 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:06.778 "name": "BaseBdev3", 00:26:06.778 "aliases": [ 00:26:06.778 "3a868e86-5c96-4bc4-b871-4adcf28cb062" 00:26:06.778 ], 00:26:06.778 "product_name": "Malloc disk", 00:26:06.778 "block_size": 512, 00:26:06.778 "num_blocks": 65536, 00:26:06.778 "uuid": "3a868e86-5c96-4bc4-b871-4adcf28cb062", 00:26:06.778 "assigned_rate_limits": { 00:26:06.778 "rw_ios_per_sec": 0, 00:26:06.778 "rw_mbytes_per_sec": 0, 00:26:06.778 "r_mbytes_per_sec": 0, 00:26:06.778 "w_mbytes_per_sec": 0 00:26:06.778 }, 00:26:06.778 "claimed": true, 00:26:06.778 "claim_type": "exclusive_write", 00:26:06.778 "zoned": false, 00:26:06.778 "supported_io_types": { 00:26:06.778 "read": true, 00:26:06.778 "write": true, 00:26:06.778 "unmap": true, 00:26:06.778 "flush": true, 00:26:06.778 "reset": true, 00:26:06.778 "nvme_admin": false, 00:26:06.778 "nvme_io": false, 00:26:06.778 "nvme_io_md": false, 00:26:06.778 "write_zeroes": true, 00:26:06.778 "zcopy": true, 00:26:06.778 "get_zone_info": false, 00:26:06.778 "zone_management": false, 00:26:06.778 "zone_append": false, 00:26:06.778 "compare": false, 00:26:06.778 "compare_and_write": false, 00:26:06.778 "abort": true, 00:26:06.778 "seek_hole": false, 00:26:06.778 "seek_data": false, 00:26:06.778 "copy": true, 00:26:06.778 "nvme_iov_md": false 00:26:06.778 }, 00:26:06.778 "memory_domains": [ 00:26:06.778 { 00:26:06.778 "dma_device_id": "system", 00:26:06.778 "dma_device_type": 1 00:26:06.778 }, 00:26:06.778 { 00:26:06.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.778 "dma_device_type": 2 00:26:06.778 } 00:26:06.778 ], 00:26:06.778 "driver_specific": {} 00:26:06.778 }' 00:26:06.778 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:06.778 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:06.778 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:06.778 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.036 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.036 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:07.036 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.036 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.036 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:07.036 14:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.036 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.295 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:07.295 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:07.295 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:07.295 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:07.553 "name": "BaseBdev4", 00:26:07.553 "aliases": [ 00:26:07.553 "c83af3a3-03fb-438c-be25-df412f2bd0f5" 00:26:07.553 ], 00:26:07.553 "product_name": "Malloc disk", 00:26:07.553 "block_size": 512, 00:26:07.553 "num_blocks": 65536, 00:26:07.553 "uuid": "c83af3a3-03fb-438c-be25-df412f2bd0f5", 00:26:07.553 "assigned_rate_limits": { 00:26:07.553 "rw_ios_per_sec": 0, 00:26:07.553 "rw_mbytes_per_sec": 0, 00:26:07.553 "r_mbytes_per_sec": 0, 00:26:07.553 "w_mbytes_per_sec": 0 00:26:07.553 }, 00:26:07.553 "claimed": true, 00:26:07.553 "claim_type": "exclusive_write", 00:26:07.553 "zoned": false, 00:26:07.553 "supported_io_types": { 00:26:07.553 "read": true, 00:26:07.553 "write": true, 00:26:07.553 "unmap": true, 00:26:07.553 "flush": true, 00:26:07.553 "reset": true, 00:26:07.553 "nvme_admin": false, 00:26:07.553 "nvme_io": false, 00:26:07.553 "nvme_io_md": false, 00:26:07.553 "write_zeroes": true, 00:26:07.553 "zcopy": true, 00:26:07.553 "get_zone_info": false, 00:26:07.553 "zone_management": false, 00:26:07.553 "zone_append": false, 00:26:07.553 "compare": false, 00:26:07.553 "compare_and_write": false, 00:26:07.553 "abort": true, 00:26:07.553 "seek_hole": false, 00:26:07.553 "seek_data": false, 00:26:07.553 "copy": true, 00:26:07.553 "nvme_iov_md": false 00:26:07.553 }, 00:26:07.553 "memory_domains": [ 00:26:07.553 { 00:26:07.553 "dma_device_id": "system", 00:26:07.553 "dma_device_type": 1 00:26:07.553 }, 00:26:07.553 { 00:26:07.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.553 "dma_device_type": 2 00:26:07.553 } 00:26:07.553 ], 00:26:07.553 "driver_specific": {} 00:26:07.553 }' 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:07.553 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.811 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.811 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:07.811 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.811 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.811 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:07.811 14:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:08.069 [2024-07-25 14:08:57.078148] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:08.069 [2024-07-25 14:08:57.078398] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:26:08.069 [2024-07-25 14:08:57.078577] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.069 [2024-07-25 14:08:57.078775] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:08.070 [2024-07-25 14:08:57.078914] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:26:08.070 14:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 137439 00:26:08.070 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 137439 ']' 00:26:08.070 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 137439 00:26:08.070 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:26:08.070 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:08.070 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137439 00:26:08.342 killing process with pid 137439 00:26:08.342 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:08.343 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:08.343 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137439' 00:26:08.343 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 137439 00:26:08.343 [2024-07-25 14:08:57.122729] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:08.343 14:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 137439 00:26:08.600 [2024-07-25 14:08:57.450495] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.533 ************************************ 00:26:09.533 END TEST raid_state_function_test 00:26:09.533 ************************************ 00:26:09.533 14:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:26:09.533 00:26:09.533 real 0m38.067s 00:26:09.533 user 1m10.966s 00:26:09.533 sys 0m4.335s 00:26:09.533 14:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.533 14:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.790 14:08:58 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:26:09.790 14:08:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:09.790 14:08:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:09.790 14:08:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.790 ************************************ 00:26:09.790 START TEST raid_state_function_test_sb 00:26:09.791 ************************************ 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=138586 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 138586' 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:09.791 Process raid pid: 138586 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 138586 /var/tmp/spdk-raid.sock 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 138586 ']' 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:09.791 14:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.791 [2024-07-25 14:08:58.687979] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:26:09.791 [2024-07-25 14:08:58.688452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.049 [2024-07-25 14:08:58.862493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.306 [2024-07-25 14:08:59.110803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.306 [2024-07-25 14:08:59.301543] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.873 14:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.873 14:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:26:10.873 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:11.131 [2024-07-25 14:08:59.921014] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.131 [2024-07-25 14:08:59.921410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.131 [2024-07-25 14:08:59.921557] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.131 [2024-07-25 14:08:59.921626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.131 [2024-07-25 14:08:59.921731] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:11.131 [2024-07-25 14:08:59.921806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:11.131 [2024-07-25 14:08:59.921910] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:11.131 [2024-07-25 14:08:59.921980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.131 14:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.131 14:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.131 "name": "Existed_Raid", 00:26:11.131 "uuid": "180de86e-1b77-418e-a18c-378241e447c5", 00:26:11.131 "strip_size_kb": 64, 00:26:11.131 "state": "configuring", 00:26:11.131 "raid_level": "concat", 00:26:11.131 "superblock": true, 00:26:11.131 "num_base_bdevs": 4, 00:26:11.131 "num_base_bdevs_discovered": 0, 00:26:11.131 "num_base_bdevs_operational": 4, 00:26:11.131 "base_bdevs_list": [ 00:26:11.131 { 00:26:11.131 "name": "BaseBdev1", 00:26:11.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.131 "is_configured": false, 00:26:11.131 "data_offset": 0, 00:26:11.131 "data_size": 0 00:26:11.131 }, 00:26:11.131 { 00:26:11.131 "name": "BaseBdev2", 00:26:11.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.131 "is_configured": false, 00:26:11.131 "data_offset": 0, 00:26:11.131 "data_size": 0 00:26:11.131 }, 00:26:11.131 { 00:26:11.131 "name": "BaseBdev3", 00:26:11.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.131 "is_configured": false, 00:26:11.131 "data_offset": 0, 00:26:11.131 "data_size": 0 00:26:11.132 }, 00:26:11.132 { 00:26:11.132 "name": "BaseBdev4", 00:26:11.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.132 "is_configured": false, 00:26:11.132 "data_offset": 0, 00:26:11.132 "data_size": 0 00:26:11.132 } 00:26:11.132 ] 00:26:11.132 }' 00:26:11.132 14:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.132 14:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.067 14:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:12.067 [2024-07-25 14:09:01.053163] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:12.067 [2024-07-25 14:09:01.053425] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:26:12.067 14:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:12.325 [2024-07-25 14:09:01.329237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:12.325 
[2024-07-25 14:09:01.329492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:12.325 [2024-07-25 14:09:01.329603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:12.325 [2024-07-25 14:09:01.329692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:12.325 [2024-07-25 14:09:01.329832] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:12.325 [2024-07-25 14:09:01.330023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:12.325 [2024-07-25 14:09:01.330126] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:12.325 [2024-07-25 14:09:01.330192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:12.325 14:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:12.625 [2024-07-25 14:09:01.587162] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.625 BaseBdev1 00:26:12.625 14:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:12.625 14:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:12.625 14:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:12.625 14:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:12.625 14:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:12.625 14:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:12.625 14:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.883 14:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:13.141 [ 00:26:13.141 { 00:26:13.141 "name": "BaseBdev1", 00:26:13.141 "aliases": [ 00:26:13.141 "3e95c590-7122-424d-86a1-f3d7241b4fbf" 00:26:13.141 ], 00:26:13.141 "product_name": "Malloc disk", 00:26:13.141 "block_size": 512, 00:26:13.141 "num_blocks": 65536, 00:26:13.141 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:13.141 "assigned_rate_limits": { 00:26:13.141 "rw_ios_per_sec": 0, 00:26:13.141 "rw_mbytes_per_sec": 0, 00:26:13.141 "r_mbytes_per_sec": 0, 00:26:13.141 "w_mbytes_per_sec": 0 00:26:13.141 }, 00:26:13.141 "claimed": true, 00:26:13.141 "claim_type": "exclusive_write", 00:26:13.141 "zoned": false, 00:26:13.141 "supported_io_types": { 00:26:13.141 "read": true, 00:26:13.141 "write": true, 00:26:13.141 "unmap": true, 00:26:13.141 "flush": true, 00:26:13.141 "reset": true, 00:26:13.141 "nvme_admin": false, 00:26:13.141 "nvme_io": false, 00:26:13.141 "nvme_io_md": false, 00:26:13.141 "write_zeroes": true, 00:26:13.141 "zcopy": true, 00:26:13.141 "get_zone_info": false, 00:26:13.141 "zone_management": false, 00:26:13.141 "zone_append": false, 00:26:13.141 "compare": false, 00:26:13.141 "compare_and_write": false, 00:26:13.141 "abort": true, 00:26:13.141 "seek_hole": false, 
00:26:13.141 "seek_data": false, 00:26:13.141 "copy": true, 00:26:13.141 "nvme_iov_md": false 00:26:13.141 }, 00:26:13.141 "memory_domains": [ 00:26:13.141 { 00:26:13.141 "dma_device_id": "system", 00:26:13.141 "dma_device_type": 1 00:26:13.141 }, 00:26:13.141 { 00:26:13.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.141 "dma_device_type": 2 00:26:13.141 } 00:26:13.141 ], 00:26:13.141 "driver_specific": {} 00:26:13.141 } 00:26:13.141 ] 00:26:13.141 14:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:13.141 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:13.141 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:13.141 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.141 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:13.141 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:13.142 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.142 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.142 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.142 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.142 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.142 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.142 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.400 14:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.400 "name": "Existed_Raid", 00:26:13.400 "uuid": "1ec22cce-0f45-4712-aa60-6fdce071314b", 00:26:13.400 "strip_size_kb": 64, 00:26:13.400 "state": "configuring", 00:26:13.400 "raid_level": "concat", 00:26:13.400 "superblock": true, 00:26:13.400 "num_base_bdevs": 4, 00:26:13.400 "num_base_bdevs_discovered": 1, 00:26:13.400 "num_base_bdevs_operational": 4, 00:26:13.400 "base_bdevs_list": [ 00:26:13.400 { 00:26:13.400 "name": "BaseBdev1", 00:26:13.400 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:13.400 "is_configured": true, 00:26:13.400 "data_offset": 2048, 00:26:13.401 "data_size": 63488 00:26:13.401 }, 00:26:13.401 { 00:26:13.401 "name": "BaseBdev2", 00:26:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.401 "is_configured": false, 00:26:13.401 "data_offset": 0, 00:26:13.401 "data_size": 0 00:26:13.401 }, 00:26:13.401 { 00:26:13.401 "name": "BaseBdev3", 00:26:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.401 "is_configured": false, 00:26:13.401 "data_offset": 0, 00:26:13.401 "data_size": 0 00:26:13.401 }, 00:26:13.401 { 00:26:13.401 "name": "BaseBdev4", 00:26:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.401 "is_configured": false, 00:26:13.401 "data_offset": 0, 00:26:13.401 "data_size": 0 00:26:13.401 } 00:26:13.401 ] 00:26:13.401 }' 00:26:13.401 14:09:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.401 14:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:14.331 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:14.331 [2024-07-25 14:09:03.263629] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:14.331 [2024-07-25 14:09:03.263964] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:26:14.331 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:14.587 [2024-07-25 14:09:03.527746] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:14.587 [2024-07-25 14:09:03.530076] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:14.587 [2024-07-25 14:09:03.530285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:14.587 [2024-07-25 14:09:03.530428] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:14.587 [2024-07-25 14:09:03.530499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:14.587 [2024-07-25 14:09:03.530722] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:14.587 [2024-07-25 14:09:03.530788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.587 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:26:14.844 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.844 "name": "Existed_Raid", 00:26:14.844 "uuid": "5817e9a1-256a-4b5d-a38a-c8a88bb0d829", 00:26:14.844 "strip_size_kb": 64, 00:26:14.844 "state": "configuring", 00:26:14.844 "raid_level": "concat", 00:26:14.844 "superblock": true, 00:26:14.844 "num_base_bdevs": 4, 00:26:14.844 "num_base_bdevs_discovered": 1, 00:26:14.844 "num_base_bdevs_operational": 4, 00:26:14.844 "base_bdevs_list": [ 00:26:14.844 { 00:26:14.844 "name": "BaseBdev1", 00:26:14.844 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:14.844 "is_configured": true, 00:26:14.844 "data_offset": 2048, 00:26:14.844 "data_size": 63488 00:26:14.844 }, 00:26:14.844 { 00:26:14.844 "name": "BaseBdev2", 00:26:14.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.844 "is_configured": false, 00:26:14.844 "data_offset": 0, 00:26:14.844 "data_size": 0 00:26:14.844 }, 00:26:14.844 { 00:26:14.844 "name": "BaseBdev3", 00:26:14.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.844 "is_configured": false, 00:26:14.844 "data_offset": 0, 00:26:14.844 "data_size": 0 00:26:14.844 }, 00:26:14.844 { 00:26:14.844 "name": "BaseBdev4", 00:26:14.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.844 "is_configured": false, 00:26:14.844 "data_offset": 0, 00:26:14.844 "data_size": 0 00:26:14.844 } 00:26:14.844 ] 00:26:14.844 }' 00:26:14.844 14:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.844 14:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.408 14:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:15.974 [2024-07-25 14:09:04.714508] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:15.974 BaseBdev2 00:26:15.974 14:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:15.974 14:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:15.974 14:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:15.974 14:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:15.974 14:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:15.974 14:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:15.974 14:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.974 14:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:16.231 [ 00:26:16.231 { 00:26:16.231 "name": "BaseBdev2", 00:26:16.231 "aliases": [ 00:26:16.231 "09108d33-99f7-4eb4-94fe-ab728633c0e6" 00:26:16.231 ], 00:26:16.231 "product_name": "Malloc disk", 00:26:16.231 "block_size": 512, 00:26:16.232 "num_blocks": 65536, 00:26:16.232 "uuid": "09108d33-99f7-4eb4-94fe-ab728633c0e6", 00:26:16.232 "assigned_rate_limits": { 00:26:16.232 "rw_ios_per_sec": 0, 00:26:16.232 "rw_mbytes_per_sec": 0, 00:26:16.232 "r_mbytes_per_sec": 0, 
00:26:16.232 "w_mbytes_per_sec": 0 00:26:16.232 }, 00:26:16.232 "claimed": true, 00:26:16.232 "claim_type": "exclusive_write", 00:26:16.232 "zoned": false, 00:26:16.232 "supported_io_types": { 00:26:16.232 "read": true, 00:26:16.232 "write": true, 00:26:16.232 "unmap": true, 00:26:16.232 "flush": true, 00:26:16.232 "reset": true, 00:26:16.232 "nvme_admin": false, 00:26:16.232 "nvme_io": false, 00:26:16.232 "nvme_io_md": false, 00:26:16.232 "write_zeroes": true, 00:26:16.232 "zcopy": true, 00:26:16.232 "get_zone_info": false, 00:26:16.232 "zone_management": false, 00:26:16.232 "zone_append": false, 00:26:16.232 "compare": false, 00:26:16.232 "compare_and_write": false, 00:26:16.232 "abort": true, 00:26:16.232 "seek_hole": false, 00:26:16.232 "seek_data": false, 00:26:16.232 "copy": true, 00:26:16.232 "nvme_iov_md": false 00:26:16.232 }, 00:26:16.232 "memory_domains": [ 00:26:16.232 { 00:26:16.232 "dma_device_id": "system", 00:26:16.232 "dma_device_type": 1 00:26:16.232 }, 00:26:16.232 { 00:26:16.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.232 "dma_device_type": 2 00:26:16.232 } 00:26:16.232 ], 00:26:16.232 "driver_specific": {} 00:26:16.232 } 00:26:16.232 ] 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.232 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.490 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.490 "name": "Existed_Raid", 00:26:16.490 "uuid": "5817e9a1-256a-4b5d-a38a-c8a88bb0d829", 00:26:16.490 "strip_size_kb": 64, 00:26:16.490 "state": "configuring", 00:26:16.490 "raid_level": "concat", 00:26:16.490 "superblock": true, 00:26:16.490 "num_base_bdevs": 4, 00:26:16.490 "num_base_bdevs_discovered": 2, 00:26:16.490 "num_base_bdevs_operational": 4, 00:26:16.490 "base_bdevs_list": [ 00:26:16.490 { 00:26:16.490 
"name": "BaseBdev1", 00:26:16.490 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:16.490 "is_configured": true, 00:26:16.490 "data_offset": 2048, 00:26:16.490 "data_size": 63488 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "name": "BaseBdev2", 00:26:16.490 "uuid": "09108d33-99f7-4eb4-94fe-ab728633c0e6", 00:26:16.490 "is_configured": true, 00:26:16.490 "data_offset": 2048, 00:26:16.490 "data_size": 63488 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "name": "BaseBdev3", 00:26:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.490 "is_configured": false, 00:26:16.490 "data_offset": 0, 00:26:16.490 "data_size": 0 00:26:16.490 }, 00:26:16.490 { 00:26:16.490 "name": "BaseBdev4", 00:26:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.490 "is_configured": false, 00:26:16.490 "data_offset": 0, 00:26:16.490 "data_size": 0 00:26:16.490 } 00:26:16.490 ] 00:26:16.490 }' 00:26:16.490 14:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.490 14:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.422 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:17.422 [2024-07-25 14:09:06.375961] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:17.422 BaseBdev3 00:26:17.422 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:17.422 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:17.422 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:17.422 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:17.422 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:17.422 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:17.423 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:17.681 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:17.939 [ 00:26:17.939 { 00:26:17.939 "name": "BaseBdev3", 00:26:17.939 "aliases": [ 00:26:17.939 "8e845e69-dffa-42bc-acbd-0e7371f4ac10" 00:26:17.939 ], 00:26:17.939 "product_name": "Malloc disk", 00:26:17.939 "block_size": 512, 00:26:17.939 "num_blocks": 65536, 00:26:17.939 "uuid": "8e845e69-dffa-42bc-acbd-0e7371f4ac10", 00:26:17.939 "assigned_rate_limits": { 00:26:17.939 "rw_ios_per_sec": 0, 00:26:17.939 "rw_mbytes_per_sec": 0, 00:26:17.939 "r_mbytes_per_sec": 0, 00:26:17.939 "w_mbytes_per_sec": 0 00:26:17.939 }, 00:26:17.939 "claimed": true, 00:26:17.939 "claim_type": "exclusive_write", 00:26:17.939 "zoned": false, 00:26:17.939 "supported_io_types": { 00:26:17.939 "read": true, 00:26:17.939 "write": true, 00:26:17.939 "unmap": true, 00:26:17.939 "flush": true, 00:26:17.939 "reset": true, 00:26:17.939 "nvme_admin": false, 00:26:17.939 "nvme_io": false, 00:26:17.939 "nvme_io_md": false, 00:26:17.939 "write_zeroes": true, 00:26:17.939 "zcopy": true, 00:26:17.939 "get_zone_info": false, 
00:26:17.939 "zone_management": false, 00:26:17.939 "zone_append": false, 00:26:17.939 "compare": false, 00:26:17.939 "compare_and_write": false, 00:26:17.939 "abort": true, 00:26:17.939 "seek_hole": false, 00:26:17.939 "seek_data": false, 00:26:17.939 "copy": true, 00:26:17.939 "nvme_iov_md": false 00:26:17.939 }, 00:26:17.939 "memory_domains": [ 00:26:17.939 { 00:26:17.939 "dma_device_id": "system", 00:26:17.939 "dma_device_type": 1 00:26:17.939 }, 00:26:17.939 { 00:26:17.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.939 "dma_device_type": 2 00:26:17.939 } 00:26:17.939 ], 00:26:17.939 "driver_specific": {} 00:26:17.939 } 00:26:17.939 ] 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.939 14:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.197 14:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:18.197 "name": "Existed_Raid", 00:26:18.197 "uuid": "5817e9a1-256a-4b5d-a38a-c8a88bb0d829", 00:26:18.197 "strip_size_kb": 64, 00:26:18.197 "state": "configuring", 00:26:18.197 "raid_level": "concat", 00:26:18.197 "superblock": true, 00:26:18.197 "num_base_bdevs": 4, 00:26:18.197 "num_base_bdevs_discovered": 3, 00:26:18.197 "num_base_bdevs_operational": 4, 00:26:18.197 "base_bdevs_list": [ 00:26:18.197 { 00:26:18.197 "name": "BaseBdev1", 00:26:18.197 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:18.197 "is_configured": true, 00:26:18.197 "data_offset": 2048, 00:26:18.197 "data_size": 63488 00:26:18.197 }, 00:26:18.197 { 00:26:18.197 "name": "BaseBdev2", 00:26:18.197 "uuid": "09108d33-99f7-4eb4-94fe-ab728633c0e6", 00:26:18.197 "is_configured": true, 00:26:18.197 "data_offset": 2048, 00:26:18.197 "data_size": 63488 00:26:18.197 }, 00:26:18.197 { 00:26:18.197 "name": "BaseBdev3", 00:26:18.197 "uuid": 
"8e845e69-dffa-42bc-acbd-0e7371f4ac10", 00:26:18.197 "is_configured": true, 00:26:18.197 "data_offset": 2048, 00:26:18.197 "data_size": 63488 00:26:18.197 }, 00:26:18.197 { 00:26:18.197 "name": "BaseBdev4", 00:26:18.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.197 "is_configured": false, 00:26:18.197 "data_offset": 0, 00:26:18.197 "data_size": 0 00:26:18.197 } 00:26:18.197 ] 00:26:18.197 }' 00:26:18.197 14:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:18.197 14:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.762 14:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:19.020 [2024-07-25 14:09:07.988221] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:19.020 [2024-07-25 14:09:07.988949] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:26:19.020 [2024-07-25 14:09:07.989125] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:19.020 [2024-07-25 14:09:07.989496] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:19.020 BaseBdev4 00:26:19.020 [2024-07-25 14:09:07.990231] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:26:19.020 [2024-07-25 14:09:07.990437] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:26:19.020 [2024-07-25 14:09:07.990784] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:19.020 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:19.020 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:19.020 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:19.020 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:19.020 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:19.020 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:19.020 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:19.335 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:19.607 [ 00:26:19.607 { 00:26:19.607 "name": "BaseBdev4", 00:26:19.607 "aliases": [ 00:26:19.607 "0d7432c2-cd35-4f10-8729-64aa5ed2fb83" 00:26:19.607 ], 00:26:19.607 "product_name": "Malloc disk", 00:26:19.607 "block_size": 512, 00:26:19.607 "num_blocks": 65536, 00:26:19.607 "uuid": "0d7432c2-cd35-4f10-8729-64aa5ed2fb83", 00:26:19.607 "assigned_rate_limits": { 00:26:19.607 "rw_ios_per_sec": 0, 00:26:19.607 "rw_mbytes_per_sec": 0, 00:26:19.607 "r_mbytes_per_sec": 0, 00:26:19.607 "w_mbytes_per_sec": 0 00:26:19.607 }, 00:26:19.607 "claimed": true, 00:26:19.607 "claim_type": "exclusive_write", 00:26:19.607 "zoned": false, 00:26:19.607 "supported_io_types": { 00:26:19.607 "read": true, 00:26:19.607 "write": true, 
00:26:19.607 "unmap": true, 00:26:19.607 "flush": true, 00:26:19.607 "reset": true, 00:26:19.607 "nvme_admin": false, 00:26:19.607 "nvme_io": false, 00:26:19.607 "nvme_io_md": false, 00:26:19.607 "write_zeroes": true, 00:26:19.607 "zcopy": true, 00:26:19.607 "get_zone_info": false, 00:26:19.607 "zone_management": false, 00:26:19.607 "zone_append": false, 00:26:19.607 "compare": false, 00:26:19.607 "compare_and_write": false, 00:26:19.607 "abort": true, 00:26:19.607 "seek_hole": false, 00:26:19.607 "seek_data": false, 00:26:19.607 "copy": true, 00:26:19.607 "nvme_iov_md": false 00:26:19.607 }, 00:26:19.607 "memory_domains": [ 00:26:19.607 { 00:26:19.607 "dma_device_id": "system", 00:26:19.607 "dma_device_type": 1 00:26:19.607 }, 00:26:19.607 { 00:26:19.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.607 "dma_device_type": 2 00:26:19.607 } 00:26:19.607 ], 00:26:19.607 "driver_specific": {} 00:26:19.607 } 00:26:19.607 ] 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.607 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.865 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.865 "name": "Existed_Raid", 00:26:19.865 "uuid": "5817e9a1-256a-4b5d-a38a-c8a88bb0d829", 00:26:19.865 "strip_size_kb": 64, 00:26:19.865 "state": "online", 00:26:19.865 "raid_level": "concat", 00:26:19.865 "superblock": true, 00:26:19.865 "num_base_bdevs": 4, 00:26:19.865 "num_base_bdevs_discovered": 4, 00:26:19.865 "num_base_bdevs_operational": 4, 00:26:19.865 "base_bdevs_list": [ 00:26:19.865 { 00:26:19.865 "name": "BaseBdev1", 00:26:19.865 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:19.865 "is_configured": true, 00:26:19.865 "data_offset": 2048, 00:26:19.865 "data_size": 63488 00:26:19.865 }, 00:26:19.865 { 00:26:19.865 "name": "BaseBdev2", 00:26:19.865 
"uuid": "09108d33-99f7-4eb4-94fe-ab728633c0e6", 00:26:19.865 "is_configured": true, 00:26:19.865 "data_offset": 2048, 00:26:19.865 "data_size": 63488 00:26:19.865 }, 00:26:19.865 { 00:26:19.865 "name": "BaseBdev3", 00:26:19.865 "uuid": "8e845e69-dffa-42bc-acbd-0e7371f4ac10", 00:26:19.865 "is_configured": true, 00:26:19.865 "data_offset": 2048, 00:26:19.865 "data_size": 63488 00:26:19.865 }, 00:26:19.865 { 00:26:19.866 "name": "BaseBdev4", 00:26:19.866 "uuid": "0d7432c2-cd35-4f10-8729-64aa5ed2fb83", 00:26:19.866 "is_configured": true, 00:26:19.866 "data_offset": 2048, 00:26:19.866 "data_size": 63488 00:26:19.866 } 00:26:19.866 ] 00:26:19.866 }' 00:26:19.866 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.866 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:20.433 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:20.691 [2024-07-25 14:09:09.617140] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:20.691 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:20.691 "name": "Existed_Raid", 00:26:20.691 "aliases": [ 00:26:20.691 "5817e9a1-256a-4b5d-a38a-c8a88bb0d829" 00:26:20.691 ], 00:26:20.691 "product_name": "Raid Volume", 00:26:20.691 "block_size": 512, 00:26:20.691 "num_blocks": 253952, 00:26:20.691 "uuid": "5817e9a1-256a-4b5d-a38a-c8a88bb0d829", 00:26:20.691 "assigned_rate_limits": { 00:26:20.691 "rw_ios_per_sec": 0, 00:26:20.691 "rw_mbytes_per_sec": 0, 00:26:20.691 "r_mbytes_per_sec": 0, 00:26:20.691 "w_mbytes_per_sec": 0 00:26:20.691 }, 00:26:20.691 "claimed": false, 00:26:20.691 "zoned": false, 00:26:20.691 "supported_io_types": { 00:26:20.691 "read": true, 00:26:20.691 "write": true, 00:26:20.691 "unmap": true, 00:26:20.691 "flush": true, 00:26:20.691 "reset": true, 00:26:20.691 "nvme_admin": false, 00:26:20.691 "nvme_io": false, 00:26:20.691 "nvme_io_md": false, 00:26:20.691 "write_zeroes": true, 00:26:20.691 "zcopy": false, 00:26:20.691 "get_zone_info": false, 00:26:20.691 "zone_management": false, 00:26:20.691 "zone_append": false, 00:26:20.691 "compare": false, 00:26:20.691 "compare_and_write": false, 00:26:20.691 "abort": false, 00:26:20.691 "seek_hole": false, 00:26:20.691 "seek_data": false, 00:26:20.691 "copy": false, 00:26:20.691 "nvme_iov_md": false 00:26:20.691 }, 00:26:20.691 "memory_domains": [ 00:26:20.691 { 00:26:20.691 "dma_device_id": "system", 00:26:20.691 "dma_device_type": 1 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.691 
"dma_device_type": 2 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "dma_device_id": "system", 00:26:20.691 "dma_device_type": 1 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.691 "dma_device_type": 2 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "dma_device_id": "system", 00:26:20.691 "dma_device_type": 1 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.691 "dma_device_type": 2 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "dma_device_id": "system", 00:26:20.691 "dma_device_type": 1 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.691 "dma_device_type": 2 00:26:20.691 } 00:26:20.691 ], 00:26:20.691 "driver_specific": { 00:26:20.691 "raid": { 00:26:20.691 "uuid": "5817e9a1-256a-4b5d-a38a-c8a88bb0d829", 00:26:20.691 "strip_size_kb": 64, 00:26:20.691 "state": "online", 00:26:20.691 "raid_level": "concat", 00:26:20.691 "superblock": true, 00:26:20.691 "num_base_bdevs": 4, 00:26:20.691 "num_base_bdevs_discovered": 4, 00:26:20.691 "num_base_bdevs_operational": 4, 00:26:20.691 "base_bdevs_list": [ 00:26:20.691 { 00:26:20.691 "name": "BaseBdev1", 00:26:20.691 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:20.691 "is_configured": true, 00:26:20.691 "data_offset": 2048, 00:26:20.691 "data_size": 63488 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "name": "BaseBdev2", 00:26:20.691 "uuid": "09108d33-99f7-4eb4-94fe-ab728633c0e6", 00:26:20.691 "is_configured": true, 00:26:20.691 "data_offset": 2048, 00:26:20.691 "data_size": 63488 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "name": "BaseBdev3", 00:26:20.691 "uuid": "8e845e69-dffa-42bc-acbd-0e7371f4ac10", 00:26:20.691 "is_configured": true, 00:26:20.691 "data_offset": 2048, 00:26:20.691 "data_size": 63488 00:26:20.691 }, 00:26:20.691 { 00:26:20.691 "name": "BaseBdev4", 00:26:20.691 "uuid": "0d7432c2-cd35-4f10-8729-64aa5ed2fb83", 00:26:20.691 "is_configured": true, 00:26:20.691 "data_offset": 2048, 00:26:20.691 "data_size": 63488 00:26:20.691 } 00:26:20.691 ] 00:26:20.691 } 00:26:20.691 } 00:26:20.691 }' 00:26:20.691 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:20.691 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:20.691 BaseBdev2 00:26:20.691 BaseBdev3 00:26:20.691 BaseBdev4' 00:26:20.691 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:20.691 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:20.691 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:20.949 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:20.949 "name": "BaseBdev1", 00:26:20.949 "aliases": [ 00:26:20.949 "3e95c590-7122-424d-86a1-f3d7241b4fbf" 00:26:20.949 ], 00:26:20.949 "product_name": "Malloc disk", 00:26:20.949 "block_size": 512, 00:26:20.949 "num_blocks": 65536, 00:26:20.949 "uuid": "3e95c590-7122-424d-86a1-f3d7241b4fbf", 00:26:20.949 "assigned_rate_limits": { 00:26:20.949 "rw_ios_per_sec": 0, 00:26:20.949 "rw_mbytes_per_sec": 0, 00:26:20.949 "r_mbytes_per_sec": 0, 00:26:20.949 "w_mbytes_per_sec": 0 00:26:20.949 }, 00:26:20.949 "claimed": true, 00:26:20.949 "claim_type": 
"exclusive_write", 00:26:20.949 "zoned": false, 00:26:20.949 "supported_io_types": { 00:26:20.949 "read": true, 00:26:20.949 "write": true, 00:26:20.949 "unmap": true, 00:26:20.949 "flush": true, 00:26:20.949 "reset": true, 00:26:20.949 "nvme_admin": false, 00:26:20.949 "nvme_io": false, 00:26:20.949 "nvme_io_md": false, 00:26:20.949 "write_zeroes": true, 00:26:20.949 "zcopy": true, 00:26:20.949 "get_zone_info": false, 00:26:20.949 "zone_management": false, 00:26:20.949 "zone_append": false, 00:26:20.949 "compare": false, 00:26:20.949 "compare_and_write": false, 00:26:20.949 "abort": true, 00:26:20.949 "seek_hole": false, 00:26:20.949 "seek_data": false, 00:26:20.949 "copy": true, 00:26:20.949 "nvme_iov_md": false 00:26:20.949 }, 00:26:20.949 "memory_domains": [ 00:26:20.949 { 00:26:20.949 "dma_device_id": "system", 00:26:20.949 "dma_device_type": 1 00:26:20.949 }, 00:26:20.949 { 00:26:20.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.949 "dma_device_type": 2 00:26:20.949 } 00:26:20.949 ], 00:26:20.949 "driver_specific": {} 00:26:20.949 }' 00:26:20.949 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:20.949 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:21.207 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.465 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.465 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:21.465 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:21.465 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:21.465 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:21.723 "name": "BaseBdev2", 00:26:21.723 "aliases": [ 00:26:21.723 "09108d33-99f7-4eb4-94fe-ab728633c0e6" 00:26:21.723 ], 00:26:21.723 "product_name": "Malloc disk", 00:26:21.723 "block_size": 512, 00:26:21.723 "num_blocks": 65536, 00:26:21.723 "uuid": "09108d33-99f7-4eb4-94fe-ab728633c0e6", 00:26:21.723 "assigned_rate_limits": { 00:26:21.723 "rw_ios_per_sec": 0, 00:26:21.723 "rw_mbytes_per_sec": 0, 00:26:21.723 "r_mbytes_per_sec": 0, 00:26:21.723 "w_mbytes_per_sec": 0 00:26:21.723 }, 00:26:21.723 "claimed": true, 00:26:21.723 "claim_type": "exclusive_write", 00:26:21.723 "zoned": false, 00:26:21.723 "supported_io_types": { 00:26:21.723 "read": true, 00:26:21.723 "write": true, 00:26:21.723 
"unmap": true, 00:26:21.723 "flush": true, 00:26:21.723 "reset": true, 00:26:21.723 "nvme_admin": false, 00:26:21.723 "nvme_io": false, 00:26:21.723 "nvme_io_md": false, 00:26:21.723 "write_zeroes": true, 00:26:21.723 "zcopy": true, 00:26:21.723 "get_zone_info": false, 00:26:21.723 "zone_management": false, 00:26:21.723 "zone_append": false, 00:26:21.723 "compare": false, 00:26:21.723 "compare_and_write": false, 00:26:21.723 "abort": true, 00:26:21.723 "seek_hole": false, 00:26:21.723 "seek_data": false, 00:26:21.723 "copy": true, 00:26:21.723 "nvme_iov_md": false 00:26:21.723 }, 00:26:21.723 "memory_domains": [ 00:26:21.723 { 00:26:21.723 "dma_device_id": "system", 00:26:21.723 "dma_device_type": 1 00:26:21.723 }, 00:26:21.723 { 00:26:21.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.723 "dma_device_type": 2 00:26:21.723 } 00:26:21.723 ], 00:26:21.723 "driver_specific": {} 00:26:21.723 }' 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:21.723 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.981 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.981 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:21.981 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.981 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.982 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:21.982 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:21.982 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:21.982 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:22.240 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:22.240 "name": "BaseBdev3", 00:26:22.240 "aliases": [ 00:26:22.240 "8e845e69-dffa-42bc-acbd-0e7371f4ac10" 00:26:22.240 ], 00:26:22.240 "product_name": "Malloc disk", 00:26:22.240 "block_size": 512, 00:26:22.240 "num_blocks": 65536, 00:26:22.240 "uuid": "8e845e69-dffa-42bc-acbd-0e7371f4ac10", 00:26:22.240 "assigned_rate_limits": { 00:26:22.240 "rw_ios_per_sec": 0, 00:26:22.240 "rw_mbytes_per_sec": 0, 00:26:22.240 "r_mbytes_per_sec": 0, 00:26:22.240 "w_mbytes_per_sec": 0 00:26:22.240 }, 00:26:22.240 "claimed": true, 00:26:22.240 "claim_type": "exclusive_write", 00:26:22.240 "zoned": false, 00:26:22.240 "supported_io_types": { 00:26:22.240 "read": true, 00:26:22.240 "write": true, 00:26:22.240 "unmap": true, 00:26:22.240 "flush": true, 00:26:22.240 "reset": true, 00:26:22.240 "nvme_admin": false, 00:26:22.240 "nvme_io": false, 00:26:22.240 
"nvme_io_md": false, 00:26:22.240 "write_zeroes": true, 00:26:22.240 "zcopy": true, 00:26:22.240 "get_zone_info": false, 00:26:22.240 "zone_management": false, 00:26:22.240 "zone_append": false, 00:26:22.240 "compare": false, 00:26:22.240 "compare_and_write": false, 00:26:22.240 "abort": true, 00:26:22.240 "seek_hole": false, 00:26:22.240 "seek_data": false, 00:26:22.240 "copy": true, 00:26:22.240 "nvme_iov_md": false 00:26:22.240 }, 00:26:22.240 "memory_domains": [ 00:26:22.240 { 00:26:22.240 "dma_device_id": "system", 00:26:22.240 "dma_device_type": 1 00:26:22.240 }, 00:26:22.240 { 00:26:22.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.240 "dma_device_type": 2 00:26:22.240 } 00:26:22.240 ], 00:26:22.240 "driver_specific": {} 00:26:22.240 }' 00:26:22.240 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:22.240 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:22.240 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:22.240 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.498 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:22.498 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:22.498 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.498 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:22.498 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:22.498 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:22.498 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:22.765 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:22.765 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:22.765 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:22.765 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:22.765 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:22.765 "name": "BaseBdev4", 00:26:22.765 "aliases": [ 00:26:22.765 "0d7432c2-cd35-4f10-8729-64aa5ed2fb83" 00:26:22.765 ], 00:26:22.765 "product_name": "Malloc disk", 00:26:22.765 "block_size": 512, 00:26:22.765 "num_blocks": 65536, 00:26:22.765 "uuid": "0d7432c2-cd35-4f10-8729-64aa5ed2fb83", 00:26:22.765 "assigned_rate_limits": { 00:26:22.765 "rw_ios_per_sec": 0, 00:26:22.765 "rw_mbytes_per_sec": 0, 00:26:22.765 "r_mbytes_per_sec": 0, 00:26:22.765 "w_mbytes_per_sec": 0 00:26:22.765 }, 00:26:22.765 "claimed": true, 00:26:22.765 "claim_type": "exclusive_write", 00:26:22.765 "zoned": false, 00:26:22.765 "supported_io_types": { 00:26:22.765 "read": true, 00:26:22.765 "write": true, 00:26:22.765 "unmap": true, 00:26:22.765 "flush": true, 00:26:22.765 "reset": true, 00:26:22.765 "nvme_admin": false, 00:26:22.765 "nvme_io": false, 00:26:22.765 "nvme_io_md": false, 00:26:22.765 "write_zeroes": true, 00:26:22.765 "zcopy": true, 00:26:22.765 "get_zone_info": false, 00:26:22.765 "zone_management": 
false, 00:26:22.765 "zone_append": false, 00:26:22.765 "compare": false, 00:26:22.765 "compare_and_write": false, 00:26:22.765 "abort": true, 00:26:22.765 "seek_hole": false, 00:26:22.765 "seek_data": false, 00:26:22.765 "copy": true, 00:26:22.765 "nvme_iov_md": false 00:26:22.765 }, 00:26:22.765 "memory_domains": [ 00:26:22.765 { 00:26:22.765 "dma_device_id": "system", 00:26:22.765 "dma_device_type": 1 00:26:22.765 }, 00:26:22.765 { 00:26:22.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.765 "dma_device_type": 2 00:26:22.765 } 00:26:22.765 ], 00:26:22.765 "driver_specific": {} 00:26:22.765 }' 00:26:22.765 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:23.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:23.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.033 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.033 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:23.033 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.291 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.291 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:23.291 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:23.549 [2024-07-25 14:09:12.394000] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:23.549 [2024-07-25 14:09:12.394252] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:23.549 [2024-07-25 14:09:12.394551] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.549 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.808 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.808 "name": "Existed_Raid", 00:26:23.808 "uuid": "5817e9a1-256a-4b5d-a38a-c8a88bb0d829", 00:26:23.808 "strip_size_kb": 64, 00:26:23.808 "state": "offline", 00:26:23.808 "raid_level": "concat", 00:26:23.808 "superblock": true, 00:26:23.808 "num_base_bdevs": 4, 00:26:23.808 "num_base_bdevs_discovered": 3, 00:26:23.808 "num_base_bdevs_operational": 3, 00:26:23.808 "base_bdevs_list": [ 00:26:23.808 { 00:26:23.808 "name": null, 00:26:23.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.808 "is_configured": false, 00:26:23.808 "data_offset": 2048, 00:26:23.808 "data_size": 63488 00:26:23.808 }, 00:26:23.808 { 00:26:23.808 "name": "BaseBdev2", 00:26:23.808 "uuid": "09108d33-99f7-4eb4-94fe-ab728633c0e6", 00:26:23.808 "is_configured": true, 00:26:23.808 "data_offset": 2048, 00:26:23.808 "data_size": 63488 00:26:23.808 }, 00:26:23.808 { 00:26:23.808 "name": "BaseBdev3", 00:26:23.808 "uuid": "8e845e69-dffa-42bc-acbd-0e7371f4ac10", 00:26:23.808 "is_configured": true, 00:26:23.808 "data_offset": 2048, 00:26:23.808 "data_size": 63488 00:26:23.808 }, 00:26:23.808 { 00:26:23.808 "name": "BaseBdev4", 00:26:23.808 "uuid": "0d7432c2-cd35-4f10-8729-64aa5ed2fb83", 00:26:23.808 "is_configured": true, 00:26:23.808 "data_offset": 2048, 00:26:23.808 "data_size": 63488 00:26:23.808 } 00:26:23.808 ] 00:26:23.808 }' 00:26:23.808 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.808 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.375 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:24.375 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:24.375 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.375 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:24.634 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:24.634 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:24.634 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:25.200 [2024-07-25 14:09:13.936202] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:25.200 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:25.200 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:25.200 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.200 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:25.459 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:25.459 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:25.459 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:25.717 [2024-07-25 14:09:14.602377] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:25.717 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:25.717 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:25.717 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:25.717 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.977 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:25.977 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:25.977 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:26.544 [2024-07-25 14:09:15.279734] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:26.544 [2024-07-25 14:09:15.280102] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:26:26.544 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:26.544 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:26.544 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.544 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:26.803 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:26.803 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:26.803 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:26.803 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:26.803 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:26.803 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev2 00:26:27.062 BaseBdev2 00:26:27.062 14:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:27.062 14:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:27.062 14:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:27.062 14:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:27.062 14:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:27.062 14:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:27.062 14:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.319 14:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:27.577 [ 00:26:27.577 { 00:26:27.577 "name": "BaseBdev2", 00:26:27.577 "aliases": [ 00:26:27.577 "401d6988-ca35-4bc1-a766-66b030f1bef0" 00:26:27.577 ], 00:26:27.577 "product_name": "Malloc disk", 00:26:27.577 "block_size": 512, 00:26:27.577 "num_blocks": 65536, 00:26:27.577 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:27.577 "assigned_rate_limits": { 00:26:27.577 "rw_ios_per_sec": 0, 00:26:27.577 "rw_mbytes_per_sec": 0, 00:26:27.577 "r_mbytes_per_sec": 0, 00:26:27.577 "w_mbytes_per_sec": 0 00:26:27.577 }, 00:26:27.577 "claimed": false, 00:26:27.577 "zoned": false, 00:26:27.577 "supported_io_types": { 00:26:27.577 "read": true, 00:26:27.577 "write": true, 00:26:27.577 "unmap": true, 00:26:27.577 "flush": true, 00:26:27.577 "reset": true, 00:26:27.577 "nvme_admin": false, 00:26:27.577 "nvme_io": false, 00:26:27.577 "nvme_io_md": false, 00:26:27.577 "write_zeroes": true, 00:26:27.577 "zcopy": true, 00:26:27.577 "get_zone_info": false, 00:26:27.577 "zone_management": false, 00:26:27.577 "zone_append": false, 00:26:27.577 "compare": false, 00:26:27.577 "compare_and_write": false, 00:26:27.577 "abort": true, 00:26:27.577 "seek_hole": false, 00:26:27.577 "seek_data": false, 00:26:27.577 "copy": true, 00:26:27.577 "nvme_iov_md": false 00:26:27.577 }, 00:26:27.577 "memory_domains": [ 00:26:27.577 { 00:26:27.577 "dma_device_id": "system", 00:26:27.577 "dma_device_type": 1 00:26:27.577 }, 00:26:27.577 { 00:26:27.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.577 "dma_device_type": 2 00:26:27.577 } 00:26:27.577 ], 00:26:27.577 "driver_specific": {} 00:26:27.577 } 00:26:27.577 ] 00:26:27.577 14:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:27.578 14:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:27.578 14:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:27.578 14:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:28.143 BaseBdev3 00:26:28.143 14:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:28.143 14:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:28.143 14:09:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:28.143 14:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:28.143 14:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:28.143 14:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:28.143 14:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:28.143 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:28.401 [ 00:26:28.401 { 00:26:28.401 "name": "BaseBdev3", 00:26:28.401 "aliases": [ 00:26:28.401 "9b631e1d-90de-4ef1-972a-5fd2cb102bc7" 00:26:28.401 ], 00:26:28.401 "product_name": "Malloc disk", 00:26:28.401 "block_size": 512, 00:26:28.401 "num_blocks": 65536, 00:26:28.401 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:28.401 "assigned_rate_limits": { 00:26:28.401 "rw_ios_per_sec": 0, 00:26:28.401 "rw_mbytes_per_sec": 0, 00:26:28.401 "r_mbytes_per_sec": 0, 00:26:28.401 "w_mbytes_per_sec": 0 00:26:28.401 }, 00:26:28.401 "claimed": false, 00:26:28.401 "zoned": false, 00:26:28.401 "supported_io_types": { 00:26:28.401 "read": true, 00:26:28.401 "write": true, 00:26:28.401 "unmap": true, 00:26:28.401 "flush": true, 00:26:28.401 "reset": true, 00:26:28.401 "nvme_admin": false, 00:26:28.401 "nvme_io": false, 00:26:28.401 "nvme_io_md": false, 00:26:28.401 "write_zeroes": true, 00:26:28.401 "zcopy": true, 00:26:28.401 "get_zone_info": false, 00:26:28.401 "zone_management": false, 00:26:28.401 "zone_append": false, 00:26:28.401 "compare": false, 00:26:28.401 "compare_and_write": false, 00:26:28.401 "abort": true, 00:26:28.401 "seek_hole": false, 00:26:28.401 "seek_data": false, 00:26:28.401 "copy": true, 00:26:28.401 "nvme_iov_md": false 00:26:28.401 }, 00:26:28.401 "memory_domains": [ 00:26:28.401 { 00:26:28.401 "dma_device_id": "system", 00:26:28.401 "dma_device_type": 1 00:26:28.401 }, 00:26:28.401 { 00:26:28.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.401 "dma_device_type": 2 00:26:28.401 } 00:26:28.401 ], 00:26:28.401 "driver_specific": {} 00:26:28.401 } 00:26:28.401 ] 00:26:28.401 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:28.401 14:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:28.401 14:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:28.401 14:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:28.967 BaseBdev4 00:26:28.967 14:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:28.967 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:28.967 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:28.967 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:28.967 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
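At this point the test is partway through rebuilding its base bdevs: BaseBdev2 and BaseBdev3 have been recreated with bdev_malloc_create 32 512 and waited on, and the same waitforbdev sequence is now starting for BaseBdev4. A condensed sketch of that recreate-and-wait pattern, assuming the same rpc.py path and RPC socket as in the log (the loop is illustrative, not the script's literal shape):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  for name in BaseBdev2 BaseBdev3 BaseBdev4; do
      # 32 MiB malloc bdev with 512-byte blocks, matching "bdev_malloc_create 32 512".
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$name"
      # waitforbdev: let examine callbacks finish, then poll for the bdev (-t 2000),
      # as the @904/@906 trace lines around this point show.
      "$rpc" -s "$sock" bdev_wait_for_examine
      "$rpc" -s "$sock" bdev_get_bdevs -b "$name" -t 2000
  done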
00:26:28.967 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:28.967 14:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:29.224 14:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:29.482 [ 00:26:29.482 { 00:26:29.482 "name": "BaseBdev4", 00:26:29.482 "aliases": [ 00:26:29.482 "906ee607-d527-4e9e-8560-a8d6e0ca78f6" 00:26:29.482 ], 00:26:29.482 "product_name": "Malloc disk", 00:26:29.482 "block_size": 512, 00:26:29.482 "num_blocks": 65536, 00:26:29.482 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:29.482 "assigned_rate_limits": { 00:26:29.482 "rw_ios_per_sec": 0, 00:26:29.482 "rw_mbytes_per_sec": 0, 00:26:29.482 "r_mbytes_per_sec": 0, 00:26:29.482 "w_mbytes_per_sec": 0 00:26:29.482 }, 00:26:29.482 "claimed": false, 00:26:29.482 "zoned": false, 00:26:29.482 "supported_io_types": { 00:26:29.482 "read": true, 00:26:29.482 "write": true, 00:26:29.482 "unmap": true, 00:26:29.482 "flush": true, 00:26:29.482 "reset": true, 00:26:29.482 "nvme_admin": false, 00:26:29.482 "nvme_io": false, 00:26:29.482 "nvme_io_md": false, 00:26:29.482 "write_zeroes": true, 00:26:29.482 "zcopy": true, 00:26:29.482 "get_zone_info": false, 00:26:29.482 "zone_management": false, 00:26:29.482 "zone_append": false, 00:26:29.482 "compare": false, 00:26:29.482 "compare_and_write": false, 00:26:29.482 "abort": true, 00:26:29.482 "seek_hole": false, 00:26:29.482 "seek_data": false, 00:26:29.482 "copy": true, 00:26:29.482 "nvme_iov_md": false 00:26:29.482 }, 00:26:29.482 "memory_domains": [ 00:26:29.482 { 00:26:29.482 "dma_device_id": "system", 00:26:29.482 "dma_device_type": 1 00:26:29.482 }, 00:26:29.482 { 00:26:29.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:29.482 "dma_device_type": 2 00:26:29.482 } 00:26:29.482 ], 00:26:29.482 "driver_specific": {} 00:26:29.482 } 00:26:29.482 ] 00:26:29.482 14:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:29.482 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:29.482 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:29.482 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:29.740 [2024-07-25 14:09:18.578877] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:29.740 [2024-07-25 14:09:18.579326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:29.740 [2024-07-25 14:09:18.579558] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:29.740 [2024-07-25 14:09:18.581905] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:29.740 [2024-07-25 14:09:18.582193] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.740 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.997 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.998 "name": "Existed_Raid", 00:26:29.998 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:29.998 "strip_size_kb": 64, 00:26:29.998 "state": "configuring", 00:26:29.998 "raid_level": "concat", 00:26:29.998 "superblock": true, 00:26:29.998 "num_base_bdevs": 4, 00:26:29.998 "num_base_bdevs_discovered": 3, 00:26:29.998 "num_base_bdevs_operational": 4, 00:26:29.998 "base_bdevs_list": [ 00:26:29.998 { 00:26:29.998 "name": "BaseBdev1", 00:26:29.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.998 "is_configured": false, 00:26:29.998 "data_offset": 0, 00:26:29.998 "data_size": 0 00:26:29.998 }, 00:26:29.998 { 00:26:29.998 "name": "BaseBdev2", 00:26:29.998 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:29.998 "is_configured": true, 00:26:29.998 "data_offset": 2048, 00:26:29.998 "data_size": 63488 00:26:29.998 }, 00:26:29.998 { 00:26:29.998 "name": "BaseBdev3", 00:26:29.998 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:29.998 "is_configured": true, 00:26:29.998 "data_offset": 2048, 00:26:29.998 "data_size": 63488 00:26:29.998 }, 00:26:29.998 { 00:26:29.998 "name": "BaseBdev4", 00:26:29.998 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:29.998 "is_configured": true, 00:26:29.998 "data_offset": 2048, 00:26:29.998 "data_size": 63488 00:26:29.998 } 00:26:29.998 ] 00:26:29.998 }' 00:26:29.998 14:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.998 14:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.563 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:30.820 [2024-07-25 14:09:19.703918] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:30.820 14:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.820 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.078 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.078 "name": "Existed_Raid", 00:26:31.078 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:31.078 "strip_size_kb": 64, 00:26:31.078 "state": "configuring", 00:26:31.078 "raid_level": "concat", 00:26:31.078 "superblock": true, 00:26:31.078 "num_base_bdevs": 4, 00:26:31.078 "num_base_bdevs_discovered": 2, 00:26:31.078 "num_base_bdevs_operational": 4, 00:26:31.078 "base_bdevs_list": [ 00:26:31.078 { 00:26:31.078 "name": "BaseBdev1", 00:26:31.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.078 "is_configured": false, 00:26:31.078 "data_offset": 0, 00:26:31.078 "data_size": 0 00:26:31.078 }, 00:26:31.078 { 00:26:31.078 "name": null, 00:26:31.078 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:31.078 "is_configured": false, 00:26:31.078 "data_offset": 2048, 00:26:31.078 "data_size": 63488 00:26:31.078 }, 00:26:31.078 { 00:26:31.078 "name": "BaseBdev3", 00:26:31.078 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:31.078 "is_configured": true, 00:26:31.078 "data_offset": 2048, 00:26:31.078 "data_size": 63488 00:26:31.078 }, 00:26:31.078 { 00:26:31.078 "name": "BaseBdev4", 00:26:31.078 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:31.078 "is_configured": true, 00:26:31.078 "data_offset": 2048, 00:26:31.078 "data_size": 63488 00:26:31.078 } 00:26:31.078 ] 00:26:31.078 }' 00:26:31.078 14:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.078 14:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.644 14:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.644 14:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:32.209 14:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:32.209 14:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:32.209 [2024-07-25 14:09:21.227515] 
bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:32.209 BaseBdev1 00:26:32.209 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:32.209 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:32.209 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:32.209 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:32.209 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:32.209 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:32.209 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:32.774 [ 00:26:32.774 { 00:26:32.774 "name": "BaseBdev1", 00:26:32.774 "aliases": [ 00:26:32.774 "223b9b9c-0eea-4adf-bb1f-52de20811e46" 00:26:32.774 ], 00:26:32.774 "product_name": "Malloc disk", 00:26:32.774 "block_size": 512, 00:26:32.774 "num_blocks": 65536, 00:26:32.774 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:32.774 "assigned_rate_limits": { 00:26:32.774 "rw_ios_per_sec": 0, 00:26:32.774 "rw_mbytes_per_sec": 0, 00:26:32.774 "r_mbytes_per_sec": 0, 00:26:32.774 "w_mbytes_per_sec": 0 00:26:32.774 }, 00:26:32.774 "claimed": true, 00:26:32.774 "claim_type": "exclusive_write", 00:26:32.774 "zoned": false, 00:26:32.774 "supported_io_types": { 00:26:32.774 "read": true, 00:26:32.774 "write": true, 00:26:32.774 "unmap": true, 00:26:32.774 "flush": true, 00:26:32.774 "reset": true, 00:26:32.774 "nvme_admin": false, 00:26:32.774 "nvme_io": false, 00:26:32.774 "nvme_io_md": false, 00:26:32.774 "write_zeroes": true, 00:26:32.774 "zcopy": true, 00:26:32.774 "get_zone_info": false, 00:26:32.774 "zone_management": false, 00:26:32.774 "zone_append": false, 00:26:32.774 "compare": false, 00:26:32.774 "compare_and_write": false, 00:26:32.774 "abort": true, 00:26:32.774 "seek_hole": false, 00:26:32.774 "seek_data": false, 00:26:32.774 "copy": true, 00:26:32.774 "nvme_iov_md": false 00:26:32.774 }, 00:26:32.774 "memory_domains": [ 00:26:32.774 { 00:26:32.774 "dma_device_id": "system", 00:26:32.774 "dma_device_type": 1 00:26:32.774 }, 00:26:32.774 { 00:26:32.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.774 "dma_device_type": 2 00:26:32.774 } 00:26:32.774 ], 00:26:32.774 "driver_specific": {} 00:26:32.774 } 00:26:32.774 ] 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.774 14:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.339 14:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.339 "name": "Existed_Raid", 00:26:33.339 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:33.339 "strip_size_kb": 64, 00:26:33.339 "state": "configuring", 00:26:33.339 "raid_level": "concat", 00:26:33.339 "superblock": true, 00:26:33.339 "num_base_bdevs": 4, 00:26:33.339 "num_base_bdevs_discovered": 3, 00:26:33.339 "num_base_bdevs_operational": 4, 00:26:33.339 "base_bdevs_list": [ 00:26:33.339 { 00:26:33.339 "name": "BaseBdev1", 00:26:33.339 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:33.339 "is_configured": true, 00:26:33.339 "data_offset": 2048, 00:26:33.339 "data_size": 63488 00:26:33.339 }, 00:26:33.339 { 00:26:33.339 "name": null, 00:26:33.339 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:33.339 "is_configured": false, 00:26:33.339 "data_offset": 2048, 00:26:33.339 "data_size": 63488 00:26:33.339 }, 00:26:33.339 { 00:26:33.339 "name": "BaseBdev3", 00:26:33.339 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:33.339 "is_configured": true, 00:26:33.339 "data_offset": 2048, 00:26:33.339 "data_size": 63488 00:26:33.339 }, 00:26:33.339 { 00:26:33.339 "name": "BaseBdev4", 00:26:33.339 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:33.339 "is_configured": true, 00:26:33.339 "data_offset": 2048, 00:26:33.339 "data_size": 63488 00:26:33.339 } 00:26:33.339 ] 00:26:33.339 }' 00:26:33.339 14:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.339 14:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.904 14:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:33.904 14:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.162 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:34.162 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:34.420 [2024-07-25 14:09:23.372202] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:34.420 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.678 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.678 "name": "Existed_Raid", 00:26:34.678 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:34.678 "strip_size_kb": 64, 00:26:34.678 "state": "configuring", 00:26:34.678 "raid_level": "concat", 00:26:34.678 "superblock": true, 00:26:34.678 "num_base_bdevs": 4, 00:26:34.678 "num_base_bdevs_discovered": 2, 00:26:34.678 "num_base_bdevs_operational": 4, 00:26:34.678 "base_bdevs_list": [ 00:26:34.678 { 00:26:34.678 "name": "BaseBdev1", 00:26:34.678 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:34.678 "is_configured": true, 00:26:34.678 "data_offset": 2048, 00:26:34.678 "data_size": 63488 00:26:34.678 }, 00:26:34.678 { 00:26:34.678 "name": null, 00:26:34.678 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:34.678 "is_configured": false, 00:26:34.678 "data_offset": 2048, 00:26:34.678 "data_size": 63488 00:26:34.678 }, 00:26:34.678 { 00:26:34.678 "name": null, 00:26:34.678 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:34.678 "is_configured": false, 00:26:34.678 "data_offset": 2048, 00:26:34.678 "data_size": 63488 00:26:34.678 }, 00:26:34.678 { 00:26:34.678 "name": "BaseBdev4", 00:26:34.678 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:34.678 "is_configured": true, 00:26:34.678 "data_offset": 2048, 00:26:34.678 "data_size": 63488 00:26:34.678 } 00:26:34.678 ] 00:26:34.678 }' 00:26:34.678 14:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.678 14:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:35.612 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:35.612 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.612 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:35.612 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 
00:26:35.870 [2024-07-25 14:09:24.816771] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:35.870 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.871 14:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.129 14:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.129 "name": "Existed_Raid", 00:26:36.129 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:36.129 "strip_size_kb": 64, 00:26:36.129 "state": "configuring", 00:26:36.129 "raid_level": "concat", 00:26:36.129 "superblock": true, 00:26:36.129 "num_base_bdevs": 4, 00:26:36.129 "num_base_bdevs_discovered": 3, 00:26:36.129 "num_base_bdevs_operational": 4, 00:26:36.129 "base_bdevs_list": [ 00:26:36.129 { 00:26:36.129 "name": "BaseBdev1", 00:26:36.129 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:36.129 "is_configured": true, 00:26:36.129 "data_offset": 2048, 00:26:36.129 "data_size": 63488 00:26:36.129 }, 00:26:36.129 { 00:26:36.129 "name": null, 00:26:36.129 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:36.129 "is_configured": false, 00:26:36.129 "data_offset": 2048, 00:26:36.129 "data_size": 63488 00:26:36.129 }, 00:26:36.129 { 00:26:36.129 "name": "BaseBdev3", 00:26:36.129 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:36.129 "is_configured": true, 00:26:36.129 "data_offset": 2048, 00:26:36.129 "data_size": 63488 00:26:36.129 }, 00:26:36.129 { 00:26:36.129 "name": "BaseBdev4", 00:26:36.129 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:36.129 "is_configured": true, 00:26:36.129 "data_offset": 2048, 00:26:36.129 "data_size": 63488 00:26:36.129 } 00:26:36.129 ] 00:26:36.129 }' 00:26:36.129 14:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.129 14:09:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.731 14:09:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.731 14:09:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:37.297 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:37.297 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:37.297 [2024-07-25 14:09:26.270340] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:37.555 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.814 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.814 "name": "Existed_Raid", 00:26:37.814 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:37.814 "strip_size_kb": 64, 00:26:37.814 "state": "configuring", 00:26:37.814 "raid_level": "concat", 00:26:37.814 "superblock": true, 00:26:37.814 "num_base_bdevs": 4, 00:26:37.814 "num_base_bdevs_discovered": 2, 00:26:37.814 "num_base_bdevs_operational": 4, 00:26:37.814 "base_bdevs_list": [ 00:26:37.814 { 00:26:37.814 "name": null, 00:26:37.814 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:37.814 "is_configured": false, 00:26:37.814 "data_offset": 2048, 00:26:37.814 "data_size": 63488 00:26:37.814 }, 00:26:37.814 { 00:26:37.814 "name": null, 00:26:37.814 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:37.814 "is_configured": false, 00:26:37.814 "data_offset": 2048, 00:26:37.814 "data_size": 63488 00:26:37.814 }, 00:26:37.814 { 00:26:37.814 "name": "BaseBdev3", 00:26:37.814 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:37.814 "is_configured": true, 00:26:37.814 "data_offset": 2048, 00:26:37.814 "data_size": 63488 00:26:37.814 }, 00:26:37.814 { 00:26:37.814 "name": "BaseBdev4", 00:26:37.814 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:37.814 "is_configured": true, 00:26:37.814 "data_offset": 2048, 00:26:37.814 "data_size": 63488 00:26:37.814 } 00:26:37.814 ] 00:26:37.814 }' 00:26:37.814 14:09:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.814 14:09:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:38.380 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.380 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:38.638 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:38.638 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:38.896 [2024-07-25 14:09:27.874717] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.896 14:09:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:39.154 14:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.154 "name": "Existed_Raid", 00:26:39.154 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:39.154 "strip_size_kb": 64, 00:26:39.154 "state": "configuring", 00:26:39.154 "raid_level": "concat", 00:26:39.154 "superblock": true, 00:26:39.154 "num_base_bdevs": 4, 00:26:39.154 "num_base_bdevs_discovered": 3, 00:26:39.154 "num_base_bdevs_operational": 4, 00:26:39.154 "base_bdevs_list": [ 00:26:39.154 { 00:26:39.154 "name": null, 00:26:39.154 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:39.154 "is_configured": false, 00:26:39.154 "data_offset": 2048, 00:26:39.154 "data_size": 63488 00:26:39.154 }, 00:26:39.154 { 00:26:39.154 "name": "BaseBdev2", 00:26:39.154 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:39.154 "is_configured": true, 00:26:39.154 "data_offset": 2048, 00:26:39.154 "data_size": 63488 00:26:39.154 }, 00:26:39.154 { 00:26:39.154 "name": "BaseBdev3", 00:26:39.154 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:39.154 "is_configured": true, 00:26:39.154 "data_offset": 2048, 00:26:39.154 "data_size": 63488 00:26:39.154 }, 
00:26:39.154 { 00:26:39.154 "name": "BaseBdev4", 00:26:39.154 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:39.154 "is_configured": true, 00:26:39.154 "data_offset": 2048, 00:26:39.154 "data_size": 63488 00:26:39.154 } 00:26:39.154 ] 00:26:39.154 }' 00:26:39.154 14:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.154 14:09:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.111 14:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.111 14:09:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:40.111 14:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:40.111 14:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.111 14:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:40.370 14:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 223b9b9c-0eea-4adf-bb1f-52de20811e46 00:26:40.628 [2024-07-25 14:09:29.590859] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:40.628 [2024-07-25 14:09:29.591502] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:26:40.628 [2024-07-25 14:09:29.591715] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:40.628 [2024-07-25 14:09:29.592039] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:40.628 NewBaseBdev 00:26:40.628 [2024-07-25 14:09:29.592605] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:26:40.628 [2024-07-25 14:09:29.592623] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:26:40.628 [2024-07-25 14:09:29.592989] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.628 14:09:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:40.628 14:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:40.628 14:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:40.628 14:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:40.628 14:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:40.628 14:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:40.628 14:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:40.886 14:09:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:41.144 [ 00:26:41.144 { 00:26:41.144 "name": "NewBaseBdev", 00:26:41.144 
"aliases": [ 00:26:41.144 "223b9b9c-0eea-4adf-bb1f-52de20811e46" 00:26:41.144 ], 00:26:41.144 "product_name": "Malloc disk", 00:26:41.144 "block_size": 512, 00:26:41.144 "num_blocks": 65536, 00:26:41.144 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:41.144 "assigned_rate_limits": { 00:26:41.144 "rw_ios_per_sec": 0, 00:26:41.144 "rw_mbytes_per_sec": 0, 00:26:41.144 "r_mbytes_per_sec": 0, 00:26:41.144 "w_mbytes_per_sec": 0 00:26:41.144 }, 00:26:41.144 "claimed": true, 00:26:41.144 "claim_type": "exclusive_write", 00:26:41.144 "zoned": false, 00:26:41.144 "supported_io_types": { 00:26:41.144 "read": true, 00:26:41.144 "write": true, 00:26:41.144 "unmap": true, 00:26:41.144 "flush": true, 00:26:41.144 "reset": true, 00:26:41.144 "nvme_admin": false, 00:26:41.144 "nvme_io": false, 00:26:41.144 "nvme_io_md": false, 00:26:41.144 "write_zeroes": true, 00:26:41.144 "zcopy": true, 00:26:41.144 "get_zone_info": false, 00:26:41.144 "zone_management": false, 00:26:41.144 "zone_append": false, 00:26:41.144 "compare": false, 00:26:41.144 "compare_and_write": false, 00:26:41.144 "abort": true, 00:26:41.144 "seek_hole": false, 00:26:41.144 "seek_data": false, 00:26:41.144 "copy": true, 00:26:41.144 "nvme_iov_md": false 00:26:41.144 }, 00:26:41.144 "memory_domains": [ 00:26:41.144 { 00:26:41.144 "dma_device_id": "system", 00:26:41.144 "dma_device_type": 1 00:26:41.144 }, 00:26:41.144 { 00:26:41.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.144 "dma_device_type": 2 00:26:41.144 } 00:26:41.144 ], 00:26:41.144 "driver_specific": {} 00:26:41.144 } 00:26:41.144 ] 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:41.402 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.403 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.661 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:41.661 "name": "Existed_Raid", 00:26:41.661 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:41.661 "strip_size_kb": 64, 00:26:41.661 "state": "online", 00:26:41.661 "raid_level": "concat", 00:26:41.661 "superblock": true, 00:26:41.661 
"num_base_bdevs": 4, 00:26:41.661 "num_base_bdevs_discovered": 4, 00:26:41.661 "num_base_bdevs_operational": 4, 00:26:41.661 "base_bdevs_list": [ 00:26:41.661 { 00:26:41.661 "name": "NewBaseBdev", 00:26:41.661 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:41.661 "is_configured": true, 00:26:41.661 "data_offset": 2048, 00:26:41.661 "data_size": 63488 00:26:41.661 }, 00:26:41.661 { 00:26:41.661 "name": "BaseBdev2", 00:26:41.661 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:41.661 "is_configured": true, 00:26:41.661 "data_offset": 2048, 00:26:41.661 "data_size": 63488 00:26:41.661 }, 00:26:41.661 { 00:26:41.661 "name": "BaseBdev3", 00:26:41.661 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:41.661 "is_configured": true, 00:26:41.661 "data_offset": 2048, 00:26:41.662 "data_size": 63488 00:26:41.662 }, 00:26:41.662 { 00:26:41.662 "name": "BaseBdev4", 00:26:41.662 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:41.662 "is_configured": true, 00:26:41.662 "data_offset": 2048, 00:26:41.662 "data_size": 63488 00:26:41.662 } 00:26:41.662 ] 00:26:41.662 }' 00:26:41.662 14:09:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:41.662 14:09:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:42.227 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:42.485 [2024-07-25 14:09:31.409004] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:42.485 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:42.485 "name": "Existed_Raid", 00:26:42.485 "aliases": [ 00:26:42.485 "3089ad68-2cd5-4ca5-8720-0a6fff2728a9" 00:26:42.485 ], 00:26:42.485 "product_name": "Raid Volume", 00:26:42.485 "block_size": 512, 00:26:42.485 "num_blocks": 253952, 00:26:42.485 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:42.485 "assigned_rate_limits": { 00:26:42.485 "rw_ios_per_sec": 0, 00:26:42.485 "rw_mbytes_per_sec": 0, 00:26:42.485 "r_mbytes_per_sec": 0, 00:26:42.485 "w_mbytes_per_sec": 0 00:26:42.485 }, 00:26:42.485 "claimed": false, 00:26:42.485 "zoned": false, 00:26:42.485 "supported_io_types": { 00:26:42.485 "read": true, 00:26:42.485 "write": true, 00:26:42.485 "unmap": true, 00:26:42.485 "flush": true, 00:26:42.485 "reset": true, 00:26:42.485 "nvme_admin": false, 00:26:42.485 "nvme_io": false, 00:26:42.485 "nvme_io_md": false, 00:26:42.485 "write_zeroes": true, 00:26:42.485 "zcopy": false, 00:26:42.485 "get_zone_info": false, 00:26:42.485 "zone_management": false, 00:26:42.485 "zone_append": false, 00:26:42.485 "compare": false, 
00:26:42.485 "compare_and_write": false, 00:26:42.485 "abort": false, 00:26:42.485 "seek_hole": false, 00:26:42.485 "seek_data": false, 00:26:42.485 "copy": false, 00:26:42.485 "nvme_iov_md": false 00:26:42.485 }, 00:26:42.485 "memory_domains": [ 00:26:42.485 { 00:26:42.485 "dma_device_id": "system", 00:26:42.485 "dma_device_type": 1 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.485 "dma_device_type": 2 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "dma_device_id": "system", 00:26:42.485 "dma_device_type": 1 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.485 "dma_device_type": 2 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "dma_device_id": "system", 00:26:42.485 "dma_device_type": 1 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.485 "dma_device_type": 2 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "dma_device_id": "system", 00:26:42.485 "dma_device_type": 1 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.485 "dma_device_type": 2 00:26:42.485 } 00:26:42.485 ], 00:26:42.485 "driver_specific": { 00:26:42.485 "raid": { 00:26:42.485 "uuid": "3089ad68-2cd5-4ca5-8720-0a6fff2728a9", 00:26:42.485 "strip_size_kb": 64, 00:26:42.485 "state": "online", 00:26:42.485 "raid_level": "concat", 00:26:42.485 "superblock": true, 00:26:42.485 "num_base_bdevs": 4, 00:26:42.485 "num_base_bdevs_discovered": 4, 00:26:42.485 "num_base_bdevs_operational": 4, 00:26:42.485 "base_bdevs_list": [ 00:26:42.485 { 00:26:42.485 "name": "NewBaseBdev", 00:26:42.485 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:42.485 "is_configured": true, 00:26:42.485 "data_offset": 2048, 00:26:42.485 "data_size": 63488 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "name": "BaseBdev2", 00:26:42.485 "uuid": "401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:42.485 "is_configured": true, 00:26:42.485 "data_offset": 2048, 00:26:42.485 "data_size": 63488 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "name": "BaseBdev3", 00:26:42.485 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:42.485 "is_configured": true, 00:26:42.485 "data_offset": 2048, 00:26:42.485 "data_size": 63488 00:26:42.485 }, 00:26:42.485 { 00:26:42.485 "name": "BaseBdev4", 00:26:42.485 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:42.485 "is_configured": true, 00:26:42.485 "data_offset": 2048, 00:26:42.485 "data_size": 63488 00:26:42.485 } 00:26:42.485 ] 00:26:42.485 } 00:26:42.485 } 00:26:42.485 }' 00:26:42.485 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:42.485 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:42.485 BaseBdev2 00:26:42.485 BaseBdev3 00:26:42.485 BaseBdev4' 00:26:42.485 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:42.485 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:42.485 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:43.051 "name": "NewBaseBdev", 00:26:43.051 "aliases": [ 00:26:43.051 "223b9b9c-0eea-4adf-bb1f-52de20811e46" 
00:26:43.051 ], 00:26:43.051 "product_name": "Malloc disk", 00:26:43.051 "block_size": 512, 00:26:43.051 "num_blocks": 65536, 00:26:43.051 "uuid": "223b9b9c-0eea-4adf-bb1f-52de20811e46", 00:26:43.051 "assigned_rate_limits": { 00:26:43.051 "rw_ios_per_sec": 0, 00:26:43.051 "rw_mbytes_per_sec": 0, 00:26:43.051 "r_mbytes_per_sec": 0, 00:26:43.051 "w_mbytes_per_sec": 0 00:26:43.051 }, 00:26:43.051 "claimed": true, 00:26:43.051 "claim_type": "exclusive_write", 00:26:43.051 "zoned": false, 00:26:43.051 "supported_io_types": { 00:26:43.051 "read": true, 00:26:43.051 "write": true, 00:26:43.051 "unmap": true, 00:26:43.051 "flush": true, 00:26:43.051 "reset": true, 00:26:43.051 "nvme_admin": false, 00:26:43.051 "nvme_io": false, 00:26:43.051 "nvme_io_md": false, 00:26:43.051 "write_zeroes": true, 00:26:43.051 "zcopy": true, 00:26:43.051 "get_zone_info": false, 00:26:43.051 "zone_management": false, 00:26:43.051 "zone_append": false, 00:26:43.051 "compare": false, 00:26:43.051 "compare_and_write": false, 00:26:43.051 "abort": true, 00:26:43.051 "seek_hole": false, 00:26:43.051 "seek_data": false, 00:26:43.051 "copy": true, 00:26:43.051 "nvme_iov_md": false 00:26:43.051 }, 00:26:43.051 "memory_domains": [ 00:26:43.051 { 00:26:43.051 "dma_device_id": "system", 00:26:43.051 "dma_device_type": 1 00:26:43.051 }, 00:26:43.051 { 00:26:43.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.051 "dma_device_type": 2 00:26:43.051 } 00:26:43.051 ], 00:26:43.051 "driver_specific": {} 00:26:43.051 }' 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:43.051 14:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.051 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.051 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:43.309 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.309 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.309 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:43.309 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:43.309 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:43.309 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:43.566 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:43.566 "name": "BaseBdev2", 00:26:43.566 "aliases": [ 00:26:43.566 "401d6988-ca35-4bc1-a766-66b030f1bef0" 00:26:43.566 ], 00:26:43.566 "product_name": "Malloc disk", 00:26:43.566 "block_size": 512, 00:26:43.566 "num_blocks": 65536, 00:26:43.566 "uuid": 
"401d6988-ca35-4bc1-a766-66b030f1bef0", 00:26:43.566 "assigned_rate_limits": { 00:26:43.566 "rw_ios_per_sec": 0, 00:26:43.566 "rw_mbytes_per_sec": 0, 00:26:43.566 "r_mbytes_per_sec": 0, 00:26:43.566 "w_mbytes_per_sec": 0 00:26:43.566 }, 00:26:43.566 "claimed": true, 00:26:43.566 "claim_type": "exclusive_write", 00:26:43.566 "zoned": false, 00:26:43.566 "supported_io_types": { 00:26:43.566 "read": true, 00:26:43.566 "write": true, 00:26:43.566 "unmap": true, 00:26:43.566 "flush": true, 00:26:43.566 "reset": true, 00:26:43.566 "nvme_admin": false, 00:26:43.566 "nvme_io": false, 00:26:43.566 "nvme_io_md": false, 00:26:43.566 "write_zeroes": true, 00:26:43.566 "zcopy": true, 00:26:43.566 "get_zone_info": false, 00:26:43.566 "zone_management": false, 00:26:43.566 "zone_append": false, 00:26:43.566 "compare": false, 00:26:43.566 "compare_and_write": false, 00:26:43.566 "abort": true, 00:26:43.566 "seek_hole": false, 00:26:43.566 "seek_data": false, 00:26:43.566 "copy": true, 00:26:43.566 "nvme_iov_md": false 00:26:43.566 }, 00:26:43.566 "memory_domains": [ 00:26:43.566 { 00:26:43.566 "dma_device_id": "system", 00:26:43.566 "dma_device_type": 1 00:26:43.566 }, 00:26:43.566 { 00:26:43.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.566 "dma_device_type": 2 00:26:43.566 } 00:26:43.566 ], 00:26:43.566 "driver_specific": {} 00:26:43.566 }' 00:26:43.566 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.566 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.566 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:43.566 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.824 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.824 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:43.824 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.824 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.824 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:43.824 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.824 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:44.081 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:44.081 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:44.081 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:44.081 14:09:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:44.340 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:44.340 "name": "BaseBdev3", 00:26:44.340 "aliases": [ 00:26:44.340 "9b631e1d-90de-4ef1-972a-5fd2cb102bc7" 00:26:44.340 ], 00:26:44.340 "product_name": "Malloc disk", 00:26:44.340 "block_size": 512, 00:26:44.340 "num_blocks": 65536, 00:26:44.340 "uuid": "9b631e1d-90de-4ef1-972a-5fd2cb102bc7", 00:26:44.340 "assigned_rate_limits": { 00:26:44.340 "rw_ios_per_sec": 0, 00:26:44.340 "rw_mbytes_per_sec": 0, 
00:26:44.340 "r_mbytes_per_sec": 0, 00:26:44.340 "w_mbytes_per_sec": 0 00:26:44.340 }, 00:26:44.340 "claimed": true, 00:26:44.340 "claim_type": "exclusive_write", 00:26:44.340 "zoned": false, 00:26:44.340 "supported_io_types": { 00:26:44.340 "read": true, 00:26:44.340 "write": true, 00:26:44.340 "unmap": true, 00:26:44.340 "flush": true, 00:26:44.340 "reset": true, 00:26:44.340 "nvme_admin": false, 00:26:44.340 "nvme_io": false, 00:26:44.340 "nvme_io_md": false, 00:26:44.340 "write_zeroes": true, 00:26:44.340 "zcopy": true, 00:26:44.340 "get_zone_info": false, 00:26:44.340 "zone_management": false, 00:26:44.340 "zone_append": false, 00:26:44.340 "compare": false, 00:26:44.340 "compare_and_write": false, 00:26:44.340 "abort": true, 00:26:44.340 "seek_hole": false, 00:26:44.340 "seek_data": false, 00:26:44.340 "copy": true, 00:26:44.340 "nvme_iov_md": false 00:26:44.340 }, 00:26:44.340 "memory_domains": [ 00:26:44.340 { 00:26:44.340 "dma_device_id": "system", 00:26:44.340 "dma_device_type": 1 00:26:44.340 }, 00:26:44.340 { 00:26:44.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.340 "dma_device_type": 2 00:26:44.340 } 00:26:44.340 ], 00:26:44.340 "driver_specific": {} 00:26:44.340 }' 00:26:44.340 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:44.340 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:44.340 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:44.340 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:44.340 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:44.340 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:44.598 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:44.856 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:44.856 "name": "BaseBdev4", 00:26:44.856 "aliases": [ 00:26:44.856 "906ee607-d527-4e9e-8560-a8d6e0ca78f6" 00:26:44.856 ], 00:26:44.856 "product_name": "Malloc disk", 00:26:44.856 "block_size": 512, 00:26:44.856 "num_blocks": 65536, 00:26:44.856 "uuid": "906ee607-d527-4e9e-8560-a8d6e0ca78f6", 00:26:44.856 "assigned_rate_limits": { 00:26:44.856 "rw_ios_per_sec": 0, 00:26:44.856 "rw_mbytes_per_sec": 0, 00:26:44.856 "r_mbytes_per_sec": 0, 00:26:44.856 "w_mbytes_per_sec": 0 00:26:44.856 }, 00:26:44.856 "claimed": true, 00:26:44.856 "claim_type": 
"exclusive_write", 00:26:44.856 "zoned": false, 00:26:44.856 "supported_io_types": { 00:26:44.856 "read": true, 00:26:44.856 "write": true, 00:26:44.856 "unmap": true, 00:26:44.856 "flush": true, 00:26:44.856 "reset": true, 00:26:44.856 "nvme_admin": false, 00:26:44.856 "nvme_io": false, 00:26:44.856 "nvme_io_md": false, 00:26:44.856 "write_zeroes": true, 00:26:44.856 "zcopy": true, 00:26:44.856 "get_zone_info": false, 00:26:44.856 "zone_management": false, 00:26:44.856 "zone_append": false, 00:26:44.856 "compare": false, 00:26:44.856 "compare_and_write": false, 00:26:44.856 "abort": true, 00:26:44.856 "seek_hole": false, 00:26:44.856 "seek_data": false, 00:26:44.856 "copy": true, 00:26:44.856 "nvme_iov_md": false 00:26:44.856 }, 00:26:44.856 "memory_domains": [ 00:26:44.856 { 00:26:44.856 "dma_device_id": "system", 00:26:44.856 "dma_device_type": 1 00:26:44.856 }, 00:26:44.856 { 00:26:44.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.856 "dma_device_type": 2 00:26:44.856 } 00:26:44.856 ], 00:26:44.856 "driver_specific": {} 00:26:44.856 }' 00:26:44.856 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:44.856 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.113 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:45.113 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.113 14:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.113 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:45.113 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.113 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.113 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:45.113 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.371 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.371 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:45.371 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:45.629 [2024-07-25 14:09:34.521710] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:45.629 [2024-07-25 14:09:34.521972] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:45.629 [2024-07-25 14:09:34.522173] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:45.629 [2024-07-25 14:09:34.522355] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:45.629 [2024-07-25 14:09:34.522483] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 138586 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 138586 ']' 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 138586 00:26:45.629 
14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 138586 00:26:45.629 killing process with pid 138586 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 138586' 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 138586 00:26:45.629 [2024-07-25 14:09:34.569098] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:45.629 14:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 138586 00:26:45.888 [2024-07-25 14:09:34.907617] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:47.282 ************************************ 00:26:47.282 END TEST raid_state_function_test_sb 00:26:47.282 ************************************ 00:26:47.282 14:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:47.282 00:26:47.282 real 0m37.460s 00:26:47.282 user 1m9.463s 00:26:47.282 sys 0m4.430s 00:26:47.282 14:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.282 14:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.282 14:09:36 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:26:47.282 14:09:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:47.282 14:09:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.282 14:09:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:47.282 ************************************ 00:26:47.282 START TEST raid_superblock_test 00:26:47.282 ************************************ 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local 
strip_size_create_arg 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=139721 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 139721 /var/tmp/spdk-raid.sock 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 139721 ']' 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:47.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.282 14:09:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.282 [2024-07-25 14:09:36.218924] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
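The entries that follow show raid_superblock_test bringing up its own RPC target: test/app/bdev_svc/bdev_svc is started with -r /var/tmp/spdk-raid.sock -L bdev_raid, the harness waits for the socket, and every later step is a plain JSON-RPC call against it. The base-bdev setup traced by the next entries is, in effect, the loop below; a minimal sketch, assuming rpc.py runs from the SPDK repo root (the harness itself counts i up with the (( i <= num_base_bdevs )) test visible in the log, which amounts to the same thing):

  for i in 1 2 3 4; do
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc$i
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
  done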
00:26:47.282 [2024-07-25 14:09:36.219450] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139721 ] 00:26:47.542 [2024-07-25 14:09:36.396776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.800 [2024-07-25 14:09:36.654588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.058 [2024-07-25 14:09:36.866347] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:48.317 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:48.575 malloc1 00:26:48.575 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:48.834 [2024-07-25 14:09:37.730530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:48.834 [2024-07-25 14:09:37.730941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:48.834 [2024-07-25 14:09:37.731113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:48.834 [2024-07-25 14:09:37.731252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:48.834 [2024-07-25 14:09:37.734067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:48.834 [2024-07-25 14:09:37.734268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:48.834 pt1 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:48.834 14:09:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:49.092 malloc2 00:26:49.092 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:49.349 [2024-07-25 14:09:38.354045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:49.349 [2024-07-25 14:09:38.354417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.349 [2024-07-25 14:09:38.354591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:49.349 [2024-07-25 14:09:38.354718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.349 [2024-07-25 14:09:38.357329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.349 [2024-07-25 14:09:38.357504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:49.349 pt2 00:26:49.349 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:49.349 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:49.349 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:26:49.350 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:26:49.350 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:49.350 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:49.350 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:49.350 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:49.350 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:49.915 malloc3 00:26:49.915 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:50.173 [2024-07-25 14:09:38.963620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:50.173 [2024-07-25 14:09:38.964084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.173 [2024-07-25 14:09:38.964258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:50.173 [2024-07-25 14:09:38.964424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.173 [2024-07-25 14:09:38.967173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.173 [2024-07-25 14:09:38.967352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:50.173 pt3 00:26:50.173 
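With pt1 through pt3 registered above and pt4 created immediately below, the test assembles all four passthru bdevs into a concat array and then checks what the target reports for it. A sketch of those two steps as they appear later in this log, assuming the same /var/tmp/spdk-raid.sock target:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
      -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s   # -s requests an on-disk superblock, later found again on the base bdevs
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'   # expect "state": "online" with 4 of 4 base bdevs discovered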
14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:50.173 14:09:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:50.431 malloc4 00:26:50.431 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:50.689 [2024-07-25 14:09:39.577652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:50.689 [2024-07-25 14:09:39.578145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.689 [2024-07-25 14:09:39.578337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:50.689 [2024-07-25 14:09:39.578527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.689 [2024-07-25 14:09:39.581169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.689 [2024-07-25 14:09:39.581374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:50.689 pt4 00:26:50.689 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:50.689 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:50.689 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:50.946 [2024-07-25 14:09:39.869850] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:50.946 [2024-07-25 14:09:39.872393] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:50.946 [2024-07-25 14:09:39.872653] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:50.946 [2024-07-25 14:09:39.872874] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:50.946 [2024-07-25 14:09:39.873220] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:26:50.946 [2024-07-25 14:09:39.873409] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:26:50.946 [2024-07-25 14:09:39.873616] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:50.946 [2024-07-25 14:09:39.874148] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:26:50.946 [2024-07-25 14:09:39.874305] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:26:50.946 [2024-07-25 14:09:39.874649] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.946 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.947 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.947 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.947 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.947 14:09:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.204 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.204 "name": "raid_bdev1", 00:26:51.204 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:26:51.204 "strip_size_kb": 64, 00:26:51.204 "state": "online", 00:26:51.204 "raid_level": "concat", 00:26:51.204 "superblock": true, 00:26:51.204 "num_base_bdevs": 4, 00:26:51.204 "num_base_bdevs_discovered": 4, 00:26:51.204 "num_base_bdevs_operational": 4, 00:26:51.204 "base_bdevs_list": [ 00:26:51.204 { 00:26:51.204 "name": "pt1", 00:26:51.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:51.204 "is_configured": true, 00:26:51.204 "data_offset": 2048, 00:26:51.204 "data_size": 63488 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "name": "pt2", 00:26:51.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:51.204 "is_configured": true, 00:26:51.204 "data_offset": 2048, 00:26:51.204 "data_size": 63488 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "name": "pt3", 00:26:51.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:51.204 "is_configured": true, 00:26:51.204 "data_offset": 2048, 00:26:51.204 "data_size": 63488 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "name": "pt4", 00:26:51.204 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:51.204 "is_configured": true, 00:26:51.205 "data_offset": 2048, 00:26:51.205 "data_size": 63488 00:26:51.205 } 00:26:51.205 ] 00:26:51.205 }' 00:26:51.205 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.205 14:09:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.138 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:52.138 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:52.138 14:09:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:52.138 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:52.138 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:52.139 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:52.139 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:52.139 14:09:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:52.396 [2024-07-25 14:09:41.235319] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:52.396 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:52.396 "name": "raid_bdev1", 00:26:52.396 "aliases": [ 00:26:52.396 "778cb070-f243-461c-9634-186482f63dc6" 00:26:52.396 ], 00:26:52.396 "product_name": "Raid Volume", 00:26:52.396 "block_size": 512, 00:26:52.396 "num_blocks": 253952, 00:26:52.396 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:26:52.396 "assigned_rate_limits": { 00:26:52.396 "rw_ios_per_sec": 0, 00:26:52.396 "rw_mbytes_per_sec": 0, 00:26:52.396 "r_mbytes_per_sec": 0, 00:26:52.396 "w_mbytes_per_sec": 0 00:26:52.396 }, 00:26:52.396 "claimed": false, 00:26:52.396 "zoned": false, 00:26:52.396 "supported_io_types": { 00:26:52.396 "read": true, 00:26:52.396 "write": true, 00:26:52.396 "unmap": true, 00:26:52.396 "flush": true, 00:26:52.396 "reset": true, 00:26:52.396 "nvme_admin": false, 00:26:52.396 "nvme_io": false, 00:26:52.396 "nvme_io_md": false, 00:26:52.396 "write_zeroes": true, 00:26:52.396 "zcopy": false, 00:26:52.396 "get_zone_info": false, 00:26:52.396 "zone_management": false, 00:26:52.396 "zone_append": false, 00:26:52.396 "compare": false, 00:26:52.396 "compare_and_write": false, 00:26:52.396 "abort": false, 00:26:52.396 "seek_hole": false, 00:26:52.396 "seek_data": false, 00:26:52.396 "copy": false, 00:26:52.396 "nvme_iov_md": false 00:26:52.396 }, 00:26:52.396 "memory_domains": [ 00:26:52.396 { 00:26:52.396 "dma_device_id": "system", 00:26:52.396 "dma_device_type": 1 00:26:52.396 }, 00:26:52.396 { 00:26:52.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.396 "dma_device_type": 2 00:26:52.396 }, 00:26:52.396 { 00:26:52.396 "dma_device_id": "system", 00:26:52.396 "dma_device_type": 1 00:26:52.396 }, 00:26:52.396 { 00:26:52.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.396 "dma_device_type": 2 00:26:52.396 }, 00:26:52.396 { 00:26:52.396 "dma_device_id": "system", 00:26:52.396 "dma_device_type": 1 00:26:52.396 }, 00:26:52.396 { 00:26:52.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.396 "dma_device_type": 2 00:26:52.396 }, 00:26:52.396 { 00:26:52.396 "dma_device_id": "system", 00:26:52.396 "dma_device_type": 1 00:26:52.396 }, 00:26:52.396 { 00:26:52.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.396 "dma_device_type": 2 00:26:52.396 } 00:26:52.396 ], 00:26:52.397 "driver_specific": { 00:26:52.397 "raid": { 00:26:52.397 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:26:52.397 "strip_size_kb": 64, 00:26:52.397 "state": "online", 00:26:52.397 "raid_level": "concat", 00:26:52.397 "superblock": true, 00:26:52.397 "num_base_bdevs": 4, 00:26:52.397 "num_base_bdevs_discovered": 4, 00:26:52.397 "num_base_bdevs_operational": 4, 00:26:52.397 "base_bdevs_list": [ 00:26:52.397 { 00:26:52.397 "name": "pt1", 00:26:52.397 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:26:52.397 "is_configured": true, 00:26:52.397 "data_offset": 2048, 00:26:52.397 "data_size": 63488 00:26:52.397 }, 00:26:52.397 { 00:26:52.397 "name": "pt2", 00:26:52.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:52.397 "is_configured": true, 00:26:52.397 "data_offset": 2048, 00:26:52.397 "data_size": 63488 00:26:52.397 }, 00:26:52.397 { 00:26:52.397 "name": "pt3", 00:26:52.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:52.397 "is_configured": true, 00:26:52.397 "data_offset": 2048, 00:26:52.397 "data_size": 63488 00:26:52.397 }, 00:26:52.397 { 00:26:52.397 "name": "pt4", 00:26:52.397 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:52.397 "is_configured": true, 00:26:52.397 "data_offset": 2048, 00:26:52.397 "data_size": 63488 00:26:52.397 } 00:26:52.397 ] 00:26:52.397 } 00:26:52.397 } 00:26:52.397 }' 00:26:52.397 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:52.397 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:52.397 pt2 00:26:52.397 pt3 00:26:52.397 pt4' 00:26:52.397 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:52.397 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:52.397 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:52.655 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:52.655 "name": "pt1", 00:26:52.655 "aliases": [ 00:26:52.655 "00000000-0000-0000-0000-000000000001" 00:26:52.655 ], 00:26:52.655 "product_name": "passthru", 00:26:52.655 "block_size": 512, 00:26:52.655 "num_blocks": 65536, 00:26:52.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:52.655 "assigned_rate_limits": { 00:26:52.655 "rw_ios_per_sec": 0, 00:26:52.655 "rw_mbytes_per_sec": 0, 00:26:52.655 "r_mbytes_per_sec": 0, 00:26:52.655 "w_mbytes_per_sec": 0 00:26:52.655 }, 00:26:52.655 "claimed": true, 00:26:52.655 "claim_type": "exclusive_write", 00:26:52.655 "zoned": false, 00:26:52.655 "supported_io_types": { 00:26:52.655 "read": true, 00:26:52.655 "write": true, 00:26:52.655 "unmap": true, 00:26:52.655 "flush": true, 00:26:52.655 "reset": true, 00:26:52.655 "nvme_admin": false, 00:26:52.655 "nvme_io": false, 00:26:52.655 "nvme_io_md": false, 00:26:52.655 "write_zeroes": true, 00:26:52.655 "zcopy": true, 00:26:52.655 "get_zone_info": false, 00:26:52.655 "zone_management": false, 00:26:52.655 "zone_append": false, 00:26:52.655 "compare": false, 00:26:52.655 "compare_and_write": false, 00:26:52.655 "abort": true, 00:26:52.655 "seek_hole": false, 00:26:52.655 "seek_data": false, 00:26:52.655 "copy": true, 00:26:52.655 "nvme_iov_md": false 00:26:52.655 }, 00:26:52.655 "memory_domains": [ 00:26:52.655 { 00:26:52.655 "dma_device_id": "system", 00:26:52.655 "dma_device_type": 1 00:26:52.655 }, 00:26:52.655 { 00:26:52.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.655 "dma_device_type": 2 00:26:52.655 } 00:26:52.655 ], 00:26:52.655 "driver_specific": { 00:26:52.655 "passthru": { 00:26:52.655 "name": "pt1", 00:26:52.655 "base_bdev_name": "malloc1" 00:26:52.655 } 00:26:52.655 } 00:26:52.655 }' 00:26:52.655 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:52.655 14:09:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:52.655 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:52.655 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:52.913 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:52.913 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:52.913 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:52.913 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:52.913 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:52.914 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:52.914 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:53.171 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:53.171 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:53.171 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:53.171 14:09:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:53.428 "name": "pt2", 00:26:53.428 "aliases": [ 00:26:53.428 "00000000-0000-0000-0000-000000000002" 00:26:53.428 ], 00:26:53.428 "product_name": "passthru", 00:26:53.428 "block_size": 512, 00:26:53.428 "num_blocks": 65536, 00:26:53.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:53.428 "assigned_rate_limits": { 00:26:53.428 "rw_ios_per_sec": 0, 00:26:53.428 "rw_mbytes_per_sec": 0, 00:26:53.428 "r_mbytes_per_sec": 0, 00:26:53.428 "w_mbytes_per_sec": 0 00:26:53.428 }, 00:26:53.428 "claimed": true, 00:26:53.428 "claim_type": "exclusive_write", 00:26:53.428 "zoned": false, 00:26:53.428 "supported_io_types": { 00:26:53.428 "read": true, 00:26:53.428 "write": true, 00:26:53.428 "unmap": true, 00:26:53.428 "flush": true, 00:26:53.428 "reset": true, 00:26:53.428 "nvme_admin": false, 00:26:53.428 "nvme_io": false, 00:26:53.428 "nvme_io_md": false, 00:26:53.428 "write_zeroes": true, 00:26:53.428 "zcopy": true, 00:26:53.428 "get_zone_info": false, 00:26:53.428 "zone_management": false, 00:26:53.428 "zone_append": false, 00:26:53.428 "compare": false, 00:26:53.428 "compare_and_write": false, 00:26:53.428 "abort": true, 00:26:53.428 "seek_hole": false, 00:26:53.428 "seek_data": false, 00:26:53.428 "copy": true, 00:26:53.428 "nvme_iov_md": false 00:26:53.428 }, 00:26:53.428 "memory_domains": [ 00:26:53.428 { 00:26:53.428 "dma_device_id": "system", 00:26:53.428 "dma_device_type": 1 00:26:53.428 }, 00:26:53.428 { 00:26:53.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.428 "dma_device_type": 2 00:26:53.428 } 00:26:53.428 ], 00:26:53.428 "driver_specific": { 00:26:53.428 "passthru": { 00:26:53.428 "name": "pt2", 00:26:53.428 "base_bdev_name": "malloc2" 00:26:53.428 } 00:26:53.428 } 00:26:53.428 }' 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:53.428 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:53.686 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:53.686 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:53.686 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:53.686 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:53.686 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:53.686 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:53.686 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:53.943 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:53.943 "name": "pt3", 00:26:53.943 "aliases": [ 00:26:53.943 "00000000-0000-0000-0000-000000000003" 00:26:53.943 ], 00:26:53.943 "product_name": "passthru", 00:26:53.943 "block_size": 512, 00:26:53.943 "num_blocks": 65536, 00:26:53.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:53.943 "assigned_rate_limits": { 00:26:53.943 "rw_ios_per_sec": 0, 00:26:53.943 "rw_mbytes_per_sec": 0, 00:26:53.943 "r_mbytes_per_sec": 0, 00:26:53.943 "w_mbytes_per_sec": 0 00:26:53.943 }, 00:26:53.943 "claimed": true, 00:26:53.943 "claim_type": "exclusive_write", 00:26:53.943 "zoned": false, 00:26:53.943 "supported_io_types": { 00:26:53.943 "read": true, 00:26:53.943 "write": true, 00:26:53.943 "unmap": true, 00:26:53.943 "flush": true, 00:26:53.943 "reset": true, 00:26:53.943 "nvme_admin": false, 00:26:53.943 "nvme_io": false, 00:26:53.943 "nvme_io_md": false, 00:26:53.943 "write_zeroes": true, 00:26:53.943 "zcopy": true, 00:26:53.943 "get_zone_info": false, 00:26:53.943 "zone_management": false, 00:26:53.943 "zone_append": false, 00:26:53.943 "compare": false, 00:26:53.943 "compare_and_write": false, 00:26:53.943 "abort": true, 00:26:53.943 "seek_hole": false, 00:26:53.943 "seek_data": false, 00:26:53.943 "copy": true, 00:26:53.943 "nvme_iov_md": false 00:26:53.943 }, 00:26:53.943 "memory_domains": [ 00:26:53.943 { 00:26:53.943 "dma_device_id": "system", 00:26:53.943 "dma_device_type": 1 00:26:53.943 }, 00:26:53.943 { 00:26:53.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.943 "dma_device_type": 2 00:26:53.943 } 00:26:53.943 ], 00:26:53.943 "driver_specific": { 00:26:53.943 "passthru": { 00:26:53.943 "name": "pt3", 00:26:53.943 "base_bdev_name": "malloc3" 00:26:53.943 } 00:26:53.943 } 00:26:53.943 }' 00:26:53.943 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:53.943 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.204 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:54.204 14:09:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.204 14:09:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.204 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:54.204 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.204 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.204 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:54.204 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.204 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.470 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:54.470 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:54.470 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:54.470 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:54.728 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:54.728 "name": "pt4", 00:26:54.728 "aliases": [ 00:26:54.728 "00000000-0000-0000-0000-000000000004" 00:26:54.728 ], 00:26:54.728 "product_name": "passthru", 00:26:54.728 "block_size": 512, 00:26:54.728 "num_blocks": 65536, 00:26:54.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:54.728 "assigned_rate_limits": { 00:26:54.728 "rw_ios_per_sec": 0, 00:26:54.728 "rw_mbytes_per_sec": 0, 00:26:54.728 "r_mbytes_per_sec": 0, 00:26:54.728 "w_mbytes_per_sec": 0 00:26:54.728 }, 00:26:54.728 "claimed": true, 00:26:54.728 "claim_type": "exclusive_write", 00:26:54.728 "zoned": false, 00:26:54.728 "supported_io_types": { 00:26:54.728 "read": true, 00:26:54.728 "write": true, 00:26:54.728 "unmap": true, 00:26:54.728 "flush": true, 00:26:54.728 "reset": true, 00:26:54.728 "nvme_admin": false, 00:26:54.728 "nvme_io": false, 00:26:54.728 "nvme_io_md": false, 00:26:54.728 "write_zeroes": true, 00:26:54.728 "zcopy": true, 00:26:54.728 "get_zone_info": false, 00:26:54.728 "zone_management": false, 00:26:54.728 "zone_append": false, 00:26:54.728 "compare": false, 00:26:54.728 "compare_and_write": false, 00:26:54.728 "abort": true, 00:26:54.728 "seek_hole": false, 00:26:54.728 "seek_data": false, 00:26:54.728 "copy": true, 00:26:54.728 "nvme_iov_md": false 00:26:54.728 }, 00:26:54.728 "memory_domains": [ 00:26:54.728 { 00:26:54.728 "dma_device_id": "system", 00:26:54.728 "dma_device_type": 1 00:26:54.728 }, 00:26:54.728 { 00:26:54.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.728 "dma_device_type": 2 00:26:54.728 } 00:26:54.728 ], 00:26:54.728 "driver_specific": { 00:26:54.728 "passthru": { 00:26:54.728 "name": "pt4", 00:26:54.728 "base_bdev_name": "malloc4" 00:26:54.728 } 00:26:54.728 } 00:26:54.728 }' 00:26:54.728 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.728 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.728 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:54.728 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.728 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:54.986 14:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:55.244 [2024-07-25 14:09:44.256250] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:55.244 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=778cb070-f243-461c-9634-186482f63dc6 00:26:55.244 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 778cb070-f243-461c-9634-186482f63dc6 ']' 00:26:55.244 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:55.502 [2024-07-25 14:09:44.519880] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:55.502 [2024-07-25 14:09:44.520131] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:55.502 [2024-07-25 14:09:44.520337] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.502 [2024-07-25 14:09:44.520542] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:55.502 [2024-07-25 14:09:44.520660] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:26:55.502 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.502 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:55.760 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:55.760 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:55.760 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:55.760 14:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:56.326 14:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:56.326 14:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:56.326 14:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:56.584 14:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:56.843 14:09:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:56.843 14:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:57.101 14:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:57.101 14:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:57.360 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:57.619 [2024-07-25 14:09:46.428344] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:57.619 [2024-07-25 14:09:46.431016] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:57.619 [2024-07-25 14:09:46.431216] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:57.619 [2024-07-25 14:09:46.431307] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:57.619 [2024-07-25 14:09:46.431495] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:57.619 [2024-07-25 14:09:46.431725] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:57.619 [2024-07-25 14:09:46.431895] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:26:57.619 [2024-07-25 14:09:46.432083] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:57.619 [2024-07-25 14:09:46.432228] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:57.619 [2024-07-25 14:09:46.432332] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:26:57.619 request: 00:26:57.619 { 00:26:57.619 "name": "raid_bdev1", 00:26:57.619 "raid_level": "concat", 00:26:57.619 "base_bdevs": [ 00:26:57.619 "malloc1", 00:26:57.619 "malloc2", 00:26:57.619 "malloc3", 00:26:57.619 "malloc4" 00:26:57.619 ], 00:26:57.619 "strip_size_kb": 64, 00:26:57.619 "superblock": false, 00:26:57.619 "method": "bdev_raid_create", 00:26:57.619 "req_id": 1 00:26:57.619 } 00:26:57.619 Got JSON-RPC error response 00:26:57.619 response: 00:26:57.619 { 00:26:57.619 "code": -17, 00:26:57.619 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:57.619 } 00:26:57.619 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:57.619 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:57.619 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:57.619 14:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:57.619 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.619 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:57.877 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:57.877 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:57.877 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:58.152 [2024-07-25 14:09:46.948795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:58.152 [2024-07-25 14:09:46.949154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.152 [2024-07-25 14:09:46.949308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:58.152 [2024-07-25 14:09:46.949457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.152 [2024-07-25 14:09:46.952195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.152 [2024-07-25 14:09:46.952373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:58.152 [2024-07-25 14:09:46.952658] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:58.152 [2024-07-25 14:09:46.952836] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:58.152 pt1 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:58.152 14:09:46 
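The request/response pair above is the negative half of the superblock test: because malloc1 through malloc4 now carry raid_bdev1's superblock, creating a new raid directly on them is rejected with JSON-RPC error -17 ("File exists"). Re-creating the passthru bdevs (pt1 just above, pt2 through pt4 below) lets the examine path re-claim each one, so raid_bdev1 re-assembles on its own, sitting in "configuring" until all four base bdevs are back and then going "online". A sketch of the rejected call, assuming the same socket:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
      -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
  # expected to fail: each malloc bdev already holds the superblock written for raid_bdev1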
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:58.152 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.153 14:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.412 14:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:58.412 "name": "raid_bdev1", 00:26:58.412 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:26:58.412 "strip_size_kb": 64, 00:26:58.412 "state": "configuring", 00:26:58.412 "raid_level": "concat", 00:26:58.412 "superblock": true, 00:26:58.412 "num_base_bdevs": 4, 00:26:58.412 "num_base_bdevs_discovered": 1, 00:26:58.412 "num_base_bdevs_operational": 4, 00:26:58.412 "base_bdevs_list": [ 00:26:58.412 { 00:26:58.412 "name": "pt1", 00:26:58.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:58.412 "is_configured": true, 00:26:58.412 "data_offset": 2048, 00:26:58.412 "data_size": 63488 00:26:58.412 }, 00:26:58.412 { 00:26:58.412 "name": null, 00:26:58.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:58.412 "is_configured": false, 00:26:58.412 "data_offset": 2048, 00:26:58.412 "data_size": 63488 00:26:58.412 }, 00:26:58.412 { 00:26:58.412 "name": null, 00:26:58.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:58.412 "is_configured": false, 00:26:58.412 "data_offset": 2048, 00:26:58.412 "data_size": 63488 00:26:58.412 }, 00:26:58.412 { 00:26:58.412 "name": null, 00:26:58.412 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:58.412 "is_configured": false, 00:26:58.412 "data_offset": 2048, 00:26:58.412 "data_size": 63488 00:26:58.412 } 00:26:58.412 ] 00:26:58.412 }' 00:26:58.412 14:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:58.412 14:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.980 14:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:26:58.980 14:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:59.237 [2024-07-25 14:09:48.166408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:59.238 [2024-07-25 14:09:48.166797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.238 [2024-07-25 14:09:48.167022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:59.238 [2024-07-25 14:09:48.167219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.238 [2024-07-25 14:09:48.168108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:26:59.238 [2024-07-25 14:09:48.168199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:59.238 [2024-07-25 14:09:48.168357] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:59.238 [2024-07-25 14:09:48.168424] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:59.238 pt2 00:26:59.238 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:59.496 [2024-07-25 14:09:48.446476] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.496 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.754 14:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.754 "name": "raid_bdev1", 00:26:59.754 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:26:59.754 "strip_size_kb": 64, 00:26:59.754 "state": "configuring", 00:26:59.754 "raid_level": "concat", 00:26:59.754 "superblock": true, 00:26:59.754 "num_base_bdevs": 4, 00:26:59.754 "num_base_bdevs_discovered": 1, 00:26:59.754 "num_base_bdevs_operational": 4, 00:26:59.754 "base_bdevs_list": [ 00:26:59.754 { 00:26:59.754 "name": "pt1", 00:26:59.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:59.754 "is_configured": true, 00:26:59.754 "data_offset": 2048, 00:26:59.754 "data_size": 63488 00:26:59.754 }, 00:26:59.754 { 00:26:59.754 "name": null, 00:26:59.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:59.754 "is_configured": false, 00:26:59.754 "data_offset": 2048, 00:26:59.754 "data_size": 63488 00:26:59.754 }, 00:26:59.754 { 00:26:59.754 "name": null, 00:26:59.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:59.754 "is_configured": false, 00:26:59.754 "data_offset": 2048, 00:26:59.754 "data_size": 63488 00:26:59.754 }, 00:26:59.754 { 00:26:59.754 "name": null, 00:26:59.754 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:59.754 "is_configured": false, 00:26:59.754 "data_offset": 2048, 00:26:59.754 "data_size": 63488 00:26:59.754 } 00:26:59.754 ] 00:26:59.754 }' 00:26:59.754 14:09:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.754 14:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.688 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:27:00.688 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:00.688 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:00.688 [2024-07-25 14:09:49.642821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:00.688 [2024-07-25 14:09:49.643230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.688 [2024-07-25 14:09:49.643437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:00.688 [2024-07-25 14:09:49.643629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.688 [2024-07-25 14:09:49.644296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.688 [2024-07-25 14:09:49.644487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:00.688 [2024-07-25 14:09:49.644736] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:00.688 [2024-07-25 14:09:49.644876] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:00.688 pt2 00:27:00.688 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:27:00.688 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:00.688 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:00.946 [2024-07-25 14:09:49.926993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:00.946 [2024-07-25 14:09:49.927284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.946 [2024-07-25 14:09:49.927458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:00.946 [2024-07-25 14:09:49.927639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.946 [2024-07-25 14:09:49.928322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.946 [2024-07-25 14:09:49.928553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:00.946 [2024-07-25 14:09:49.928779] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:00.946 [2024-07-25 14:09:49.928918] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:00.946 pt3 00:27:00.946 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:27:00.946 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:00.946 14:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:01.205 [2024-07-25 14:09:50.170952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:27:01.205 [2024-07-25 14:09:50.171306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.205 [2024-07-25 14:09:50.171476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:01.205 [2024-07-25 14:09:50.171630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.205 [2024-07-25 14:09:50.172276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.205 [2024-07-25 14:09:50.172447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:01.205 [2024-07-25 14:09:50.172670] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:01.205 [2024-07-25 14:09:50.172826] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:01.205 [2024-07-25 14:09:50.173098] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:27:01.205 [2024-07-25 14:09:50.173207] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:01.205 [2024-07-25 14:09:50.173346] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:01.205 [2024-07-25 14:09:50.173767] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:27:01.205 [2024-07-25 14:09:50.173919] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:27:01.205 [2024-07-25 14:09:50.174204] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.205 pt4 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.205 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.463 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.463 "name": "raid_bdev1", 00:27:01.463 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:27:01.463 "strip_size_kb": 64, 00:27:01.463 "state": "online", 00:27:01.463 
"raid_level": "concat", 00:27:01.463 "superblock": true, 00:27:01.463 "num_base_bdevs": 4, 00:27:01.463 "num_base_bdevs_discovered": 4, 00:27:01.463 "num_base_bdevs_operational": 4, 00:27:01.463 "base_bdevs_list": [ 00:27:01.463 { 00:27:01.463 "name": "pt1", 00:27:01.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:01.463 "is_configured": true, 00:27:01.463 "data_offset": 2048, 00:27:01.463 "data_size": 63488 00:27:01.463 }, 00:27:01.463 { 00:27:01.463 "name": "pt2", 00:27:01.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:01.463 "is_configured": true, 00:27:01.463 "data_offset": 2048, 00:27:01.463 "data_size": 63488 00:27:01.463 }, 00:27:01.463 { 00:27:01.463 "name": "pt3", 00:27:01.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:01.463 "is_configured": true, 00:27:01.463 "data_offset": 2048, 00:27:01.463 "data_size": 63488 00:27:01.463 }, 00:27:01.463 { 00:27:01.463 "name": "pt4", 00:27:01.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:01.463 "is_configured": true, 00:27:01.463 "data_offset": 2048, 00:27:01.463 "data_size": 63488 00:27:01.463 } 00:27:01.463 ] 00:27:01.463 }' 00:27:01.463 14:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.463 14:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:02.399 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:02.658 [2024-07-25 14:09:51.519895] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:02.658 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:02.658 "name": "raid_bdev1", 00:27:02.658 "aliases": [ 00:27:02.658 "778cb070-f243-461c-9634-186482f63dc6" 00:27:02.658 ], 00:27:02.658 "product_name": "Raid Volume", 00:27:02.658 "block_size": 512, 00:27:02.658 "num_blocks": 253952, 00:27:02.658 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:27:02.658 "assigned_rate_limits": { 00:27:02.658 "rw_ios_per_sec": 0, 00:27:02.658 "rw_mbytes_per_sec": 0, 00:27:02.658 "r_mbytes_per_sec": 0, 00:27:02.658 "w_mbytes_per_sec": 0 00:27:02.658 }, 00:27:02.658 "claimed": false, 00:27:02.658 "zoned": false, 00:27:02.658 "supported_io_types": { 00:27:02.658 "read": true, 00:27:02.658 "write": true, 00:27:02.658 "unmap": true, 00:27:02.658 "flush": true, 00:27:02.658 "reset": true, 00:27:02.658 "nvme_admin": false, 00:27:02.658 "nvme_io": false, 00:27:02.658 "nvme_io_md": false, 00:27:02.658 "write_zeroes": true, 00:27:02.658 "zcopy": false, 00:27:02.658 "get_zone_info": false, 00:27:02.658 "zone_management": false, 00:27:02.658 "zone_append": false, 00:27:02.658 "compare": false, 00:27:02.658 "compare_and_write": false, 
00:27:02.658 "abort": false, 00:27:02.658 "seek_hole": false, 00:27:02.658 "seek_data": false, 00:27:02.658 "copy": false, 00:27:02.658 "nvme_iov_md": false 00:27:02.658 }, 00:27:02.658 "memory_domains": [ 00:27:02.658 { 00:27:02.658 "dma_device_id": "system", 00:27:02.658 "dma_device_type": 1 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.658 "dma_device_type": 2 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "dma_device_id": "system", 00:27:02.658 "dma_device_type": 1 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.658 "dma_device_type": 2 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "dma_device_id": "system", 00:27:02.658 "dma_device_type": 1 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.658 "dma_device_type": 2 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "dma_device_id": "system", 00:27:02.658 "dma_device_type": 1 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.658 "dma_device_type": 2 00:27:02.658 } 00:27:02.658 ], 00:27:02.658 "driver_specific": { 00:27:02.658 "raid": { 00:27:02.658 "uuid": "778cb070-f243-461c-9634-186482f63dc6", 00:27:02.658 "strip_size_kb": 64, 00:27:02.658 "state": "online", 00:27:02.658 "raid_level": "concat", 00:27:02.658 "superblock": true, 00:27:02.658 "num_base_bdevs": 4, 00:27:02.658 "num_base_bdevs_discovered": 4, 00:27:02.658 "num_base_bdevs_operational": 4, 00:27:02.658 "base_bdevs_list": [ 00:27:02.658 { 00:27:02.658 "name": "pt1", 00:27:02.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:02.658 "is_configured": true, 00:27:02.658 "data_offset": 2048, 00:27:02.658 "data_size": 63488 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "name": "pt2", 00:27:02.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.658 "is_configured": true, 00:27:02.658 "data_offset": 2048, 00:27:02.658 "data_size": 63488 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "name": "pt3", 00:27:02.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.658 "is_configured": true, 00:27:02.658 "data_offset": 2048, 00:27:02.658 "data_size": 63488 00:27:02.658 }, 00:27:02.658 { 00:27:02.658 "name": "pt4", 00:27:02.658 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:02.658 "is_configured": true, 00:27:02.658 "data_offset": 2048, 00:27:02.658 "data_size": 63488 00:27:02.658 } 00:27:02.658 ] 00:27:02.658 } 00:27:02.658 } 00:27:02.658 }' 00:27:02.658 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:02.658 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:02.658 pt2 00:27:02.658 pt3 00:27:02.658 pt4' 00:27:02.658 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:02.658 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:02.658 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:02.917 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:02.917 "name": "pt1", 00:27:02.917 "aliases": [ 00:27:02.917 "00000000-0000-0000-0000-000000000001" 00:27:02.917 ], 00:27:02.917 "product_name": "passthru", 00:27:02.917 "block_size": 512, 00:27:02.917 "num_blocks": 65536, 00:27:02.917 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:27:02.917 "assigned_rate_limits": { 00:27:02.917 "rw_ios_per_sec": 0, 00:27:02.917 "rw_mbytes_per_sec": 0, 00:27:02.917 "r_mbytes_per_sec": 0, 00:27:02.917 "w_mbytes_per_sec": 0 00:27:02.917 }, 00:27:02.917 "claimed": true, 00:27:02.917 "claim_type": "exclusive_write", 00:27:02.917 "zoned": false, 00:27:02.917 "supported_io_types": { 00:27:02.917 "read": true, 00:27:02.917 "write": true, 00:27:02.917 "unmap": true, 00:27:02.917 "flush": true, 00:27:02.917 "reset": true, 00:27:02.917 "nvme_admin": false, 00:27:02.917 "nvme_io": false, 00:27:02.917 "nvme_io_md": false, 00:27:02.917 "write_zeroes": true, 00:27:02.917 "zcopy": true, 00:27:02.917 "get_zone_info": false, 00:27:02.917 "zone_management": false, 00:27:02.917 "zone_append": false, 00:27:02.917 "compare": false, 00:27:02.917 "compare_and_write": false, 00:27:02.917 "abort": true, 00:27:02.917 "seek_hole": false, 00:27:02.917 "seek_data": false, 00:27:02.917 "copy": true, 00:27:02.917 "nvme_iov_md": false 00:27:02.917 }, 00:27:02.917 "memory_domains": [ 00:27:02.917 { 00:27:02.917 "dma_device_id": "system", 00:27:02.917 "dma_device_type": 1 00:27:02.917 }, 00:27:02.917 { 00:27:02.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.917 "dma_device_type": 2 00:27:02.917 } 00:27:02.917 ], 00:27:02.917 "driver_specific": { 00:27:02.917 "passthru": { 00:27:02.917 "name": "pt1", 00:27:02.917 "base_bdev_name": "malloc1" 00:27:02.917 } 00:27:02.917 } 00:27:02.917 }' 00:27:02.917 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:02.917 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.176 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:03.176 14:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.176 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.176 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:03.176 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.176 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.176 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:03.176 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.176 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.434 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:03.434 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:03.434 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:03.434 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:03.693 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:03.693 "name": "pt2", 00:27:03.693 "aliases": [ 00:27:03.693 "00000000-0000-0000-0000-000000000002" 00:27:03.693 ], 00:27:03.693 "product_name": "passthru", 00:27:03.693 "block_size": 512, 00:27:03.693 "num_blocks": 65536, 00:27:03.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.693 "assigned_rate_limits": { 00:27:03.693 "rw_ios_per_sec": 0, 00:27:03.693 "rw_mbytes_per_sec": 0, 
00:27:03.693 "r_mbytes_per_sec": 0, 00:27:03.693 "w_mbytes_per_sec": 0 00:27:03.693 }, 00:27:03.693 "claimed": true, 00:27:03.693 "claim_type": "exclusive_write", 00:27:03.694 "zoned": false, 00:27:03.694 "supported_io_types": { 00:27:03.694 "read": true, 00:27:03.694 "write": true, 00:27:03.694 "unmap": true, 00:27:03.694 "flush": true, 00:27:03.694 "reset": true, 00:27:03.694 "nvme_admin": false, 00:27:03.694 "nvme_io": false, 00:27:03.694 "nvme_io_md": false, 00:27:03.694 "write_zeroes": true, 00:27:03.694 "zcopy": true, 00:27:03.694 "get_zone_info": false, 00:27:03.694 "zone_management": false, 00:27:03.694 "zone_append": false, 00:27:03.694 "compare": false, 00:27:03.694 "compare_and_write": false, 00:27:03.694 "abort": true, 00:27:03.694 "seek_hole": false, 00:27:03.694 "seek_data": false, 00:27:03.694 "copy": true, 00:27:03.694 "nvme_iov_md": false 00:27:03.694 }, 00:27:03.694 "memory_domains": [ 00:27:03.694 { 00:27:03.694 "dma_device_id": "system", 00:27:03.694 "dma_device_type": 1 00:27:03.694 }, 00:27:03.694 { 00:27:03.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.694 "dma_device_type": 2 00:27:03.694 } 00:27:03.694 ], 00:27:03.694 "driver_specific": { 00:27:03.694 "passthru": { 00:27:03.694 "name": "pt2", 00:27:03.694 "base_bdev_name": "malloc2" 00:27:03.694 } 00:27:03.694 } 00:27:03.694 }' 00:27:03.694 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.694 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.694 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:03.694 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.694 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.694 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:03.694 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:27:03.953 14:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.211 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.211 "name": "pt3", 00:27:04.211 "aliases": [ 00:27:04.211 "00000000-0000-0000-0000-000000000003" 00:27:04.211 ], 00:27:04.211 "product_name": "passthru", 00:27:04.211 "block_size": 512, 00:27:04.211 "num_blocks": 65536, 00:27:04.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.211 "assigned_rate_limits": { 00:27:04.211 "rw_ios_per_sec": 0, 00:27:04.211 "rw_mbytes_per_sec": 0, 00:27:04.211 "r_mbytes_per_sec": 0, 00:27:04.211 "w_mbytes_per_sec": 0 00:27:04.211 }, 00:27:04.211 "claimed": true, 00:27:04.211 "claim_type": 
"exclusive_write", 00:27:04.211 "zoned": false, 00:27:04.211 "supported_io_types": { 00:27:04.211 "read": true, 00:27:04.211 "write": true, 00:27:04.211 "unmap": true, 00:27:04.211 "flush": true, 00:27:04.211 "reset": true, 00:27:04.211 "nvme_admin": false, 00:27:04.211 "nvme_io": false, 00:27:04.211 "nvme_io_md": false, 00:27:04.211 "write_zeroes": true, 00:27:04.211 "zcopy": true, 00:27:04.211 "get_zone_info": false, 00:27:04.211 "zone_management": false, 00:27:04.211 "zone_append": false, 00:27:04.211 "compare": false, 00:27:04.211 "compare_and_write": false, 00:27:04.211 "abort": true, 00:27:04.211 "seek_hole": false, 00:27:04.211 "seek_data": false, 00:27:04.211 "copy": true, 00:27:04.211 "nvme_iov_md": false 00:27:04.211 }, 00:27:04.211 "memory_domains": [ 00:27:04.211 { 00:27:04.211 "dma_device_id": "system", 00:27:04.211 "dma_device_type": 1 00:27:04.211 }, 00:27:04.211 { 00:27:04.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.211 "dma_device_type": 2 00:27:04.211 } 00:27:04.211 ], 00:27:04.211 "driver_specific": { 00:27:04.211 "passthru": { 00:27:04.211 "name": "pt3", 00:27:04.211 "base_bdev_name": "malloc3" 00:27:04.211 } 00:27:04.211 } 00:27:04.211 }' 00:27:04.211 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.211 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.476 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.733 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:04.733 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:04.733 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:27:04.733 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:04.991 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:04.991 "name": "pt4", 00:27:04.991 "aliases": [ 00:27:04.991 "00000000-0000-0000-0000-000000000004" 00:27:04.991 ], 00:27:04.991 "product_name": "passthru", 00:27:04.991 "block_size": 512, 00:27:04.991 "num_blocks": 65536, 00:27:04.991 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:04.991 "assigned_rate_limits": { 00:27:04.991 "rw_ios_per_sec": 0, 00:27:04.991 "rw_mbytes_per_sec": 0, 00:27:04.991 "r_mbytes_per_sec": 0, 00:27:04.991 "w_mbytes_per_sec": 0 00:27:04.991 }, 00:27:04.991 "claimed": true, 00:27:04.991 "claim_type": "exclusive_write", 00:27:04.991 "zoned": false, 00:27:04.991 "supported_io_types": { 00:27:04.991 "read": true, 00:27:04.991 "write": true, 00:27:04.991 
"unmap": true, 00:27:04.991 "flush": true, 00:27:04.991 "reset": true, 00:27:04.991 "nvme_admin": false, 00:27:04.991 "nvme_io": false, 00:27:04.991 "nvme_io_md": false, 00:27:04.991 "write_zeroes": true, 00:27:04.991 "zcopy": true, 00:27:04.991 "get_zone_info": false, 00:27:04.991 "zone_management": false, 00:27:04.991 "zone_append": false, 00:27:04.991 "compare": false, 00:27:04.991 "compare_and_write": false, 00:27:04.991 "abort": true, 00:27:04.991 "seek_hole": false, 00:27:04.991 "seek_data": false, 00:27:04.991 "copy": true, 00:27:04.991 "nvme_iov_md": false 00:27:04.991 }, 00:27:04.991 "memory_domains": [ 00:27:04.991 { 00:27:04.991 "dma_device_id": "system", 00:27:04.991 "dma_device_type": 1 00:27:04.991 }, 00:27:04.991 { 00:27:04.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.991 "dma_device_type": 2 00:27:04.991 } 00:27:04.991 ], 00:27:04.991 "driver_specific": { 00:27:04.991 "passthru": { 00:27:04.991 "name": "pt4", 00:27:04.991 "base_bdev_name": "malloc4" 00:27:04.991 } 00:27:04.991 } 00:27:04.991 }' 00:27:04.992 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.992 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:04.992 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:04.992 14:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.992 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:05.249 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:27:05.507 [2024-07-25 14:09:54.532696] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 778cb070-f243-461c-9634-186482f63dc6 '!=' 778cb070-f243-461c-9634-186482f63dc6 ']' 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 139721 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 139721 ']' 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 139721 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:27:05.766 14:09:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139721 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139721' 00:27:05.766 killing process with pid 139721 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 139721 00:27:05.766 [2024-07-25 14:09:54.578929] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:05.766 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 139721 00:27:05.766 [2024-07-25 14:09:54.579152] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:05.766 [2024-07-25 14:09:54.579335] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:05.766 [2024-07-25 14:09:54.579449] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:27:06.024 [2024-07-25 14:09:54.912636] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:07.399 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:27:07.399 00:27:07.399 real 0m19.907s 00:27:07.399 user 0m35.973s 00:27:07.399 sys 0m2.364s 00:27:07.399 ************************************ 00:27:07.399 END TEST raid_superblock_test 00:27:07.399 ************************************ 00:27:07.399 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.399 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.399 14:09:56 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:27:07.399 14:09:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:07.399 14:09:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:07.399 14:09:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:07.399 ************************************ 00:27:07.399 START TEST raid_read_error_test 00:27:07.399 ************************************ 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=concat 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=4 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo 
BaseBdev2 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev4 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' concat '!=' raid1 ']' 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:27:07.399 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.RUvGPtROBc 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=140290 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 140290 /var/tmp/spdk-raid.sock 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 140290 ']' 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:07.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.400 14:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.400 [2024-07-25 14:09:56.192791] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
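The raid_read_error_test prologue recorded above amounts to launching bdevperf against a private RPC socket and waiting for it to come up. A minimal bash sketch of that sequence, using the paths and flags exactly as they appear in the trace (how bdevperf's output reaches the temp log is not visible in the xtrace, so the redirection below is an assumption):

bdevperf_log=$(mktemp -p /raidtest)        # e.g. /raidtest/tmp.RUvGPtROBc above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f \
    -L bdev_raid > "$bdevperf_log" &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from autotest_common.sh, as shown in the trace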
00:27:07.400 [2024-07-25 14:09:56.193846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140290 ] 00:27:07.400 [2024-07-25 14:09:56.366403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.658 [2024-07-25 14:09:56.607353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.916 [2024-07-25 14:09:56.814782] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:08.191 14:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.191 14:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:08.191 14:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:08.191 14:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:08.450 BaseBdev1_malloc 00:27:08.450 14:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:08.708 true 00:27:08.965 14:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:09.223 [2024-07-25 14:09:58.026458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:09.223 [2024-07-25 14:09:58.026780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.223 [2024-07-25 14:09:58.026983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:09.223 [2024-07-25 14:09:58.027148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.223 [2024-07-25 14:09:58.029925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.223 [2024-07-25 14:09:58.030127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:09.223 BaseBdev1 00:27:09.223 14:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:09.223 14:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:09.482 BaseBdev2_malloc 00:27:09.482 14:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:09.740 true 00:27:09.740 14:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:09.997 [2024-07-25 14:09:58.790581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:09.997 [2024-07-25 14:09:58.790990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.997 [2024-07-25 14:09:58.791198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:09.997 [2024-07-25 14:09:58.791374] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.997 [2024-07-25 14:09:58.794203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.997 [2024-07-25 14:09:58.794408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:09.997 BaseBdev2 00:27:09.997 14:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:09.997 14:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:10.255 BaseBdev3_malloc 00:27:10.255 14:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:10.514 true 00:27:10.514 14:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:10.772 [2024-07-25 14:09:59.575082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:10.772 [2024-07-25 14:09:59.575449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.772 [2024-07-25 14:09:59.575674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:10.772 [2024-07-25 14:09:59.575905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.772 [2024-07-25 14:09:59.578772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.772 [2024-07-25 14:09:59.578996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:10.772 BaseBdev3 00:27:10.772 14:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:10.772 14:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:11.030 BaseBdev4_malloc 00:27:11.030 14:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:11.288 true 00:27:11.288 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:11.546 [2024-07-25 14:10:00.351863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:11.546 [2024-07-25 14:10:00.353598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.546 [2024-07-25 14:10:00.353880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:11.546 [2024-07-25 14:10:00.354055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.546 [2024-07-25 14:10:00.356774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.546 [2024-07-25 14:10:00.357025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:11.546 BaseBdev4 00:27:11.546 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:11.806 [2024-07-25 14:10:00.669559] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:11.806 [2024-07-25 14:10:00.671793] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:11.806 [2024-07-25 14:10:00.672071] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:11.806 [2024-07-25 14:10:00.672272] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:11.807 [2024-07-25 14:10:00.672662] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:27:11.807 [2024-07-25 14:10:00.672814] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:11.807 [2024-07-25 14:10:00.673042] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:11.807 [2024-07-25 14:10:00.673590] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:27:11.807 [2024-07-25 14:10:00.673726] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:27:11.807 [2024-07-25 14:10:00.674092] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.807 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.079 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.079 "name": "raid_bdev1", 00:27:12.079 "uuid": "f201ec8e-54a5-4474-ac40-aad4e902a2bf", 00:27:12.079 "strip_size_kb": 64, 00:27:12.079 "state": "online", 00:27:12.079 "raid_level": "concat", 00:27:12.079 "superblock": true, 00:27:12.079 "num_base_bdevs": 4, 00:27:12.079 "num_base_bdevs_discovered": 4, 00:27:12.079 "num_base_bdevs_operational": 4, 00:27:12.079 "base_bdevs_list": [ 00:27:12.079 { 00:27:12.079 "name": "BaseBdev1", 00:27:12.079 "uuid": "cbeee8ad-dc1e-5256-8736-88cef111c813", 00:27:12.079 "is_configured": true, 00:27:12.079 "data_offset": 2048, 00:27:12.079 "data_size": 63488 00:27:12.079 }, 00:27:12.079 { 00:27:12.079 "name": "BaseBdev2", 
00:27:12.079 "uuid": "816105ce-3634-5b5c-91aa-668b2e3dcf1d", 00:27:12.079 "is_configured": true, 00:27:12.079 "data_offset": 2048, 00:27:12.079 "data_size": 63488 00:27:12.079 }, 00:27:12.079 { 00:27:12.079 "name": "BaseBdev3", 00:27:12.079 "uuid": "0dce7bca-c9c2-51f7-96ba-ec7cd6936497", 00:27:12.079 "is_configured": true, 00:27:12.079 "data_offset": 2048, 00:27:12.079 "data_size": 63488 00:27:12.079 }, 00:27:12.079 { 00:27:12.079 "name": "BaseBdev4", 00:27:12.079 "uuid": "449ab2be-9c48-553c-99b0-e03f7a2c4afc", 00:27:12.079 "is_configured": true, 00:27:12.079 "data_offset": 2048, 00:27:12.079 "data_size": 63488 00:27:12.079 } 00:27:12.079 ] 00:27:12.079 }' 00:27:12.079 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.079 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.645 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:27:12.645 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:12.902 [2024-07-25 14:10:01.735963] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:13.835 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ concat = \r\a\i\d\1 ]] 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=4 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.094 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.353 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.353 "name": "raid_bdev1", 00:27:14.353 "uuid": "f201ec8e-54a5-4474-ac40-aad4e902a2bf", 00:27:14.353 "strip_size_kb": 64, 00:27:14.353 "state": "online", 00:27:14.353 "raid_level": "concat", 00:27:14.353 "superblock": true, 
00:27:14.353 "num_base_bdevs": 4, 00:27:14.353 "num_base_bdevs_discovered": 4, 00:27:14.353 "num_base_bdevs_operational": 4, 00:27:14.353 "base_bdevs_list": [ 00:27:14.353 { 00:27:14.353 "name": "BaseBdev1", 00:27:14.353 "uuid": "cbeee8ad-dc1e-5256-8736-88cef111c813", 00:27:14.353 "is_configured": true, 00:27:14.353 "data_offset": 2048, 00:27:14.353 "data_size": 63488 00:27:14.353 }, 00:27:14.353 { 00:27:14.353 "name": "BaseBdev2", 00:27:14.353 "uuid": "816105ce-3634-5b5c-91aa-668b2e3dcf1d", 00:27:14.353 "is_configured": true, 00:27:14.353 "data_offset": 2048, 00:27:14.353 "data_size": 63488 00:27:14.353 }, 00:27:14.353 { 00:27:14.353 "name": "BaseBdev3", 00:27:14.354 "uuid": "0dce7bca-c9c2-51f7-96ba-ec7cd6936497", 00:27:14.354 "is_configured": true, 00:27:14.354 "data_offset": 2048, 00:27:14.354 "data_size": 63488 00:27:14.354 }, 00:27:14.354 { 00:27:14.354 "name": "BaseBdev4", 00:27:14.354 "uuid": "449ab2be-9c48-553c-99b0-e03f7a2c4afc", 00:27:14.354 "is_configured": true, 00:27:14.354 "data_offset": 2048, 00:27:14.354 "data_size": 63488 00:27:14.354 } 00:27:14.354 ] 00:27:14.354 }' 00:27:14.354 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:14.354 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.919 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:15.177 [2024-07-25 14:10:04.153504] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:15.177 [2024-07-25 14:10:04.153897] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:15.177 [2024-07-25 14:10:04.157097] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:15.177 [2024-07-25 14:10:04.157334] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:15.177 [2024-07-25 14:10:04.157563] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:15.177 [2024-07-25 14:10:04.157700] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:27:15.177 0 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 140290 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 140290 ']' 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 140290 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140290 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140290' 00:27:15.177 killing process with pid 140290 00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 140290 00:27:15.177 [2024-07-25 14:10:04.195679] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:27:15.177 14:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 140290 00:27:15.434 [2024-07-25 14:10:04.470317] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.RUvGPtROBc 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:27:16.807 ************************************ 00:27:16.807 END TEST raid_read_error_test 00:27:16.807 ************************************ 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.41 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy concat 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.41 != \0\.\0\0 ]] 00:27:16.807 00:27:16.807 real 0m9.597s 00:27:16.807 user 0m15.049s 00:27:16.807 sys 0m1.038s 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:16.807 14:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.807 14:10:05 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:27:16.807 14:10:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:16.807 14:10:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:16.807 14:10:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:16.807 ************************************ 00:27:16.807 START TEST raid_write_error_test 00:27:16.807 ************************************ 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=concat 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=4 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 
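The pass/fail decision for raid_read_error_test above comes down to parsing bdevperf's summary line for raid_bdev1 out of the temp log: the script keeps column 6 as a failures-per-second figure and, because concat cannot mask the injected read errors, requires it to be non-zero. The check from the trace, condensed:

bdevperf_log=/raidtest/tmp.RUvGPtROBc
# per-target summary line for raid_bdev1; field 6 is what the script treats as failures per second
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != \0\.\0\0 ]]   # 0.41 in the run above, so the test passes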
00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev4 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' concat '!=' raid1 ']' 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@889 -- # strip_size=64 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@890 -- # create_arg+=' -z 64' 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.WVkdbPBoAP 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=140520 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 140520 /var/tmp/spdk-raid.sock 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 140520 ']' 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:16.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:16.807 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.065 [2024-07-25 14:10:05.851180] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
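raid_write_error_test, which starts here, prepares its base bdevs the same way the read pass did: each BaseBdevN is a passthru stacked on an error bdev stacked on a 32 MB malloc bdev with 512-byte blocks (hence the 65536-block passthrus seen earlier), and the four passthrus are then assembled into the concat volume, as the trace that follows records. A condensed sketch of that loop, with $rpc again standing in for the full rpc.py invocation:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"           # 32 MB backing store, 512-byte blocks
    $rpc bdev_error_create "${bdev}_malloc"                      # exposes EE_<name> for fault injection
    $rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"  # the bdev the raid actually claims
done
$rpc bdev_raid_create -z 64 -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s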
00:27:17.065 [2024-07-25 14:10:05.852628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140520 ] 00:27:17.065 [2024-07-25 14:10:06.023931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.323 [2024-07-25 14:10:06.233201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.581 [2024-07-25 14:10:06.432002] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:17.839 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.839 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:17.839 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:17.839 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:18.097 BaseBdev1_malloc 00:27:18.355 14:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:18.355 true 00:27:18.355 14:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:18.613 [2024-07-25 14:10:07.629926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:18.613 [2024-07-25 14:10:07.630443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.613 [2024-07-25 14:10:07.630667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:18.613 [2024-07-25 14:10:07.630804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.613 [2024-07-25 14:10:07.633565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.613 [2024-07-25 14:10:07.633816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:18.613 BaseBdev1 00:27:18.613 14:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:18.613 14:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:19.216 BaseBdev2_malloc 00:27:19.216 14:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:19.216 true 00:27:19.216 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:19.474 [2024-07-25 14:10:08.444129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:19.474 [2024-07-25 14:10:08.444469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.474 [2024-07-25 14:10:08.444669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:19.474 [2024-07-25 
14:10:08.444805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.474 [2024-07-25 14:10:08.447511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.474 [2024-07-25 14:10:08.447739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:19.474 BaseBdev2 00:27:19.474 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:19.474 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:19.732 BaseBdev3_malloc 00:27:19.732 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:20.298 true 00:27:20.298 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:20.298 [2024-07-25 14:10:09.272306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:20.298 [2024-07-25 14:10:09.272631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.298 [2024-07-25 14:10:09.272712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:20.298 [2024-07-25 14:10:09.272984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.298 [2024-07-25 14:10:09.275703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.298 [2024-07-25 14:10:09.275902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:20.298 BaseBdev3 00:27:20.298 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:27:20.298 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:20.556 BaseBdev4_malloc 00:27:20.556 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:20.814 true 00:27:20.814 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:21.381 [2024-07-25 14:10:10.129237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:21.381 [2024-07-25 14:10:10.129614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.381 [2024-07-25 14:10:10.129869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:21.381 [2024-07-25 14:10:10.130029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.381 [2024-07-25 14:10:10.132995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.381 [2024-07-25 14:10:10.133203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:21.381 BaseBdev4 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:21.381 [2024-07-25 14:10:10.381693] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:21.381 [2024-07-25 14:10:10.384371] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:21.381 [2024-07-25 14:10:10.384698] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:21.381 [2024-07-25 14:10:10.384908] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:21.381 [2024-07-25 14:10:10.385358] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:27:21.381 [2024-07-25 14:10:10.385546] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:21.381 [2024-07-25 14:10:10.385720] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:21.381 [2024-07-25 14:10:10.386260] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:27:21.381 [2024-07-25 14:10:10.386416] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:27:21.381 [2024-07-25 14:10:10.386790] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.381 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.639 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:21.639 "name": "raid_bdev1", 00:27:21.639 "uuid": "5175b3f7-3c78-4d22-8c12-1b2b5d83308b", 00:27:21.639 "strip_size_kb": 64, 00:27:21.639 "state": "online", 00:27:21.639 "raid_level": "concat", 00:27:21.639 "superblock": true, 00:27:21.640 "num_base_bdevs": 4, 00:27:21.640 "num_base_bdevs_discovered": 4, 00:27:21.640 "num_base_bdevs_operational": 4, 00:27:21.640 "base_bdevs_list": [ 00:27:21.640 { 00:27:21.640 "name": "BaseBdev1", 00:27:21.640 "uuid": "4b19a585-b693-57d5-ba34-6ddd9f64d27f", 00:27:21.640 "is_configured": true, 00:27:21.640 "data_offset": 2048, 00:27:21.640 "data_size": 63488 00:27:21.640 }, 00:27:21.640 { 
00:27:21.640 "name": "BaseBdev2", 00:27:21.640 "uuid": "a12ae024-0215-5f85-a68c-8b518136492e", 00:27:21.640 "is_configured": true, 00:27:21.640 "data_offset": 2048, 00:27:21.640 "data_size": 63488 00:27:21.640 }, 00:27:21.640 { 00:27:21.640 "name": "BaseBdev3", 00:27:21.640 "uuid": "d659f03c-64e6-56b1-89cd-93d469f96bb6", 00:27:21.640 "is_configured": true, 00:27:21.640 "data_offset": 2048, 00:27:21.640 "data_size": 63488 00:27:21.640 }, 00:27:21.640 { 00:27:21.640 "name": "BaseBdev4", 00:27:21.640 "uuid": "d1924f95-92e8-558b-8ef8-3dd863701241", 00:27:21.640 "is_configured": true, 00:27:21.640 "data_offset": 2048, 00:27:21.640 "data_size": 63488 00:27:21.640 } 00:27:21.640 ] 00:27:21.640 }' 00:27:21.640 14:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:21.640 14:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.573 14:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:27:22.573 14:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:22.573 [2024-07-25 14:10:11.408734] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:23.507 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ concat = \r\a\i\d\1 ]] 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=4 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.765 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.023 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.023 "name": "raid_bdev1", 00:27:24.023 "uuid": "5175b3f7-3c78-4d22-8c12-1b2b5d83308b", 00:27:24.023 "strip_size_kb": 64, 00:27:24.023 "state": "online", 00:27:24.023 
"raid_level": "concat", 00:27:24.023 "superblock": true, 00:27:24.023 "num_base_bdevs": 4, 00:27:24.023 "num_base_bdevs_discovered": 4, 00:27:24.023 "num_base_bdevs_operational": 4, 00:27:24.023 "base_bdevs_list": [ 00:27:24.023 { 00:27:24.023 "name": "BaseBdev1", 00:27:24.023 "uuid": "4b19a585-b693-57d5-ba34-6ddd9f64d27f", 00:27:24.023 "is_configured": true, 00:27:24.023 "data_offset": 2048, 00:27:24.023 "data_size": 63488 00:27:24.023 }, 00:27:24.023 { 00:27:24.023 "name": "BaseBdev2", 00:27:24.023 "uuid": "a12ae024-0215-5f85-a68c-8b518136492e", 00:27:24.023 "is_configured": true, 00:27:24.023 "data_offset": 2048, 00:27:24.023 "data_size": 63488 00:27:24.023 }, 00:27:24.023 { 00:27:24.023 "name": "BaseBdev3", 00:27:24.023 "uuid": "d659f03c-64e6-56b1-89cd-93d469f96bb6", 00:27:24.023 "is_configured": true, 00:27:24.023 "data_offset": 2048, 00:27:24.023 "data_size": 63488 00:27:24.023 }, 00:27:24.023 { 00:27:24.023 "name": "BaseBdev4", 00:27:24.023 "uuid": "d1924f95-92e8-558b-8ef8-3dd863701241", 00:27:24.023 "is_configured": true, 00:27:24.023 "data_offset": 2048, 00:27:24.023 "data_size": 63488 00:27:24.023 } 00:27:24.023 ] 00:27:24.023 }' 00:27:24.023 14:10:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.023 14:10:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:24.589 14:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:24.847 [2024-07-25 14:10:13.846945] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:24.847 [2024-07-25 14:10:13.847153] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:24.847 [2024-07-25 14:10:13.850459] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:24.847 [2024-07-25 14:10:13.850656] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.847 [2024-07-25 14:10:13.850819] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:24.847 [2024-07-25 14:10:13.850930] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:27:24.847 0 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 140520 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 140520 ']' 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 140520 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140520 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:24.847 killing process with pid 140520 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140520' 00:27:24.847 14:10:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 140520 00:27:24.847 14:10:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 140520 00:27:24.847 [2024-07-25 14:10:13.886041] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:25.413 [2024-07-25 14:10:14.164637] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.WVkdbPBoAP 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:27:26.346 ************************************ 00:27:26.346 END TEST raid_write_error_test 00:27:26.346 ************************************ 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.41 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy concat 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@937 -- # [[ 0.41 != \0\.\0\0 ]] 00:27:26.346 00:27:26.346 real 0m9.616s 00:27:26.346 user 0m14.954s 00:27:26.346 sys 0m1.149s 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:26.346 14:10:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.603 14:10:15 bdev_raid -- bdev/bdev_raid.sh@1020 -- # for level in raid0 concat raid1 00:27:26.603 14:10:15 bdev_raid -- bdev/bdev_raid.sh@1021 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:27:26.603 14:10:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:26.603 14:10:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:26.603 14:10:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:26.603 ************************************ 00:27:26.603 START TEST raid_state_function_test 00:27:26.603 ************************************ 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:26.603 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=140732 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 140732' 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:26.604 Process raid pid: 140732 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 140732 /var/tmp/spdk-raid.sock 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 140732 ']' 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:26.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.604 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.604 [2024-07-25 14:10:15.507287] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
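[Editor's note - not part of the captured log] The raid_state_function_test that begins here reuses the harness pattern already seen in raid_write_error_test above: launch a bare bdev_svc application on a private RPC socket, drive it with scripts/rpc.py, and kill the process at the end. A minimal standalone sketch of that pattern, built only from commands visible in this log (paths are this CI workspace's, and waitforlisten is assumed to be the helper defined in the repo's test/common/autotest_common.sh, so a real script would source that file first):

  # start the bare bdev service with bdev_raid debug logging, as the test does
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # harness helper (assumed sourced)
  # example RPC against the private socket: a 32 MiB malloc bdev with 512-byte blocks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  kill "$raid_pid"                                    # teardown, as killprocess does above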
00:27:26.604 [2024-07-25 14:10:15.507631] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.861 [2024-07-25 14:10:15.667622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.861 [2024-07-25 14:10:15.882225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.119 [2024-07-25 14:10:16.085587] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:27.683 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:27.683 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:27:27.683 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:27.942 [2024-07-25 14:10:16.834368] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:27.942 [2024-07-25 14:10:16.834658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:27.942 [2024-07-25 14:10:16.834782] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:27.942 [2024-07-25 14:10:16.834852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:27.942 [2024-07-25 14:10:16.835049] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:27.942 [2024-07-25 14:10:16.835114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:27.942 [2024-07-25 14:10:16.835265] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:27.942 [2024-07-25 14:10:16.835339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.942 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:27:28.200 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:28.200 "name": "Existed_Raid", 00:27:28.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.200 "strip_size_kb": 0, 00:27:28.200 "state": "configuring", 00:27:28.200 "raid_level": "raid1", 00:27:28.200 "superblock": false, 00:27:28.200 "num_base_bdevs": 4, 00:27:28.200 "num_base_bdevs_discovered": 0, 00:27:28.200 "num_base_bdevs_operational": 4, 00:27:28.200 "base_bdevs_list": [ 00:27:28.200 { 00:27:28.200 "name": "BaseBdev1", 00:27:28.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.200 "is_configured": false, 00:27:28.200 "data_offset": 0, 00:27:28.200 "data_size": 0 00:27:28.200 }, 00:27:28.200 { 00:27:28.200 "name": "BaseBdev2", 00:27:28.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.200 "is_configured": false, 00:27:28.200 "data_offset": 0, 00:27:28.200 "data_size": 0 00:27:28.200 }, 00:27:28.200 { 00:27:28.200 "name": "BaseBdev3", 00:27:28.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.200 "is_configured": false, 00:27:28.200 "data_offset": 0, 00:27:28.200 "data_size": 0 00:27:28.200 }, 00:27:28.200 { 00:27:28.200 "name": "BaseBdev4", 00:27:28.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.200 "is_configured": false, 00:27:28.200 "data_offset": 0, 00:27:28.200 "data_size": 0 00:27:28.200 } 00:27:28.200 ] 00:27:28.200 }' 00:27:28.200 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:28.200 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.133 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:29.133 [2024-07-25 14:10:18.150698] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:29.133 [2024-07-25 14:10:18.150937] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:27:29.133 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:29.699 [2024-07-25 14:10:18.466847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:29.699 [2024-07-25 14:10:18.467167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:29.699 [2024-07-25 14:10:18.467285] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:29.699 [2024-07-25 14:10:18.467379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:29.699 [2024-07-25 14:10:18.467540] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:29.699 [2024-07-25 14:10:18.467625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:29.699 [2024-07-25 14:10:18.467794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:29.699 [2024-07-25 14:10:18.467859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:29.699 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:29.957 [2024-07-25 14:10:18.776699] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:29.957 BaseBdev1 00:27:29.957 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:29.957 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:29.957 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:29.957 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:29.957 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:29.957 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:29.957 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:30.215 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:30.474 [ 00:27:30.474 { 00:27:30.474 "name": "BaseBdev1", 00:27:30.474 "aliases": [ 00:27:30.474 "b8749bfe-4721-4937-96ce-a8cc450b33e8" 00:27:30.474 ], 00:27:30.474 "product_name": "Malloc disk", 00:27:30.474 "block_size": 512, 00:27:30.474 "num_blocks": 65536, 00:27:30.474 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:30.474 "assigned_rate_limits": { 00:27:30.474 "rw_ios_per_sec": 0, 00:27:30.474 "rw_mbytes_per_sec": 0, 00:27:30.474 "r_mbytes_per_sec": 0, 00:27:30.474 "w_mbytes_per_sec": 0 00:27:30.474 }, 00:27:30.474 "claimed": true, 00:27:30.474 "claim_type": "exclusive_write", 00:27:30.474 "zoned": false, 00:27:30.474 "supported_io_types": { 00:27:30.474 "read": true, 00:27:30.474 "write": true, 00:27:30.474 "unmap": true, 00:27:30.474 "flush": true, 00:27:30.474 "reset": true, 00:27:30.474 "nvme_admin": false, 00:27:30.474 "nvme_io": false, 00:27:30.474 "nvme_io_md": false, 00:27:30.474 "write_zeroes": true, 00:27:30.474 "zcopy": true, 00:27:30.474 "get_zone_info": false, 00:27:30.474 "zone_management": false, 00:27:30.474 "zone_append": false, 00:27:30.474 "compare": false, 00:27:30.474 "compare_and_write": false, 00:27:30.474 "abort": true, 00:27:30.474 "seek_hole": false, 00:27:30.474 "seek_data": false, 00:27:30.474 "copy": true, 00:27:30.474 "nvme_iov_md": false 00:27:30.474 }, 00:27:30.474 "memory_domains": [ 00:27:30.474 { 00:27:30.474 "dma_device_id": "system", 00:27:30.474 "dma_device_type": 1 00:27:30.474 }, 00:27:30.474 { 00:27:30.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.474 "dma_device_type": 2 00:27:30.474 } 00:27:30.474 ], 00:27:30.474 "driver_specific": {} 00:27:30.474 } 00:27:30.474 ] 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:30.474 14:10:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.474 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.733 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:30.733 "name": "Existed_Raid", 00:27:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.733 "strip_size_kb": 0, 00:27:30.733 "state": "configuring", 00:27:30.733 "raid_level": "raid1", 00:27:30.733 "superblock": false, 00:27:30.733 "num_base_bdevs": 4, 00:27:30.733 "num_base_bdevs_discovered": 1, 00:27:30.733 "num_base_bdevs_operational": 4, 00:27:30.733 "base_bdevs_list": [ 00:27:30.733 { 00:27:30.733 "name": "BaseBdev1", 00:27:30.733 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:30.733 "is_configured": true, 00:27:30.733 "data_offset": 0, 00:27:30.733 "data_size": 65536 00:27:30.733 }, 00:27:30.733 { 00:27:30.733 "name": "BaseBdev2", 00:27:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.733 "is_configured": false, 00:27:30.733 "data_offset": 0, 00:27:30.733 "data_size": 0 00:27:30.733 }, 00:27:30.733 { 00:27:30.733 "name": "BaseBdev3", 00:27:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.733 "is_configured": false, 00:27:30.733 "data_offset": 0, 00:27:30.733 "data_size": 0 00:27:30.733 }, 00:27:30.733 { 00:27:30.733 "name": "BaseBdev4", 00:27:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.733 "is_configured": false, 00:27:30.733 "data_offset": 0, 00:27:30.733 "data_size": 0 00:27:30.733 } 00:27:30.733 ] 00:27:30.733 }' 00:27:30.733 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:30.733 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.300 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:31.558 [2024-07-25 14:10:20.485307] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:31.558 [2024-07-25 14:10:20.485539] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:27:31.558 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:31.816 [2024-07-25 14:10:20.817525] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:31.817 [2024-07-25 14:10:20.820097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:31.817 
[2024-07-25 14:10:20.820299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:31.817 [2024-07-25 14:10:20.820411] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:31.817 [2024-07-25 14:10:20.820480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:31.817 [2024-07-25 14:10:20.820578] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:31.817 [2024-07-25 14:10:20.820726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.817 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.074 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:32.074 "name": "Existed_Raid", 00:27:32.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.074 "strip_size_kb": 0, 00:27:32.074 "state": "configuring", 00:27:32.074 "raid_level": "raid1", 00:27:32.074 "superblock": false, 00:27:32.074 "num_base_bdevs": 4, 00:27:32.074 "num_base_bdevs_discovered": 1, 00:27:32.074 "num_base_bdevs_operational": 4, 00:27:32.074 "base_bdevs_list": [ 00:27:32.074 { 00:27:32.074 "name": "BaseBdev1", 00:27:32.074 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:32.074 "is_configured": true, 00:27:32.074 "data_offset": 0, 00:27:32.074 "data_size": 65536 00:27:32.074 }, 00:27:32.074 { 00:27:32.074 "name": "BaseBdev2", 00:27:32.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.074 "is_configured": false, 00:27:32.074 "data_offset": 0, 00:27:32.074 "data_size": 0 00:27:32.074 }, 00:27:32.074 { 00:27:32.074 "name": "BaseBdev3", 00:27:32.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.074 "is_configured": false, 00:27:32.074 "data_offset": 0, 00:27:32.074 "data_size": 0 00:27:32.074 }, 00:27:32.074 { 00:27:32.074 "name": "BaseBdev4", 
00:27:32.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.074 "is_configured": false, 00:27:32.074 "data_offset": 0, 00:27:32.074 "data_size": 0 00:27:32.074 } 00:27:32.074 ] 00:27:32.074 }' 00:27:32.074 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:32.074 14:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.006 14:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:33.265 [2024-07-25 14:10:22.090041] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:33.265 BaseBdev2 00:27:33.265 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:33.265 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:33.265 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:33.265 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:33.265 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:33.265 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:33.265 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:33.524 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:33.782 [ 00:27:33.782 { 00:27:33.782 "name": "BaseBdev2", 00:27:33.782 "aliases": [ 00:27:33.782 "6d8f8266-14a7-4a92-b554-1caf6bdd24e2" 00:27:33.782 ], 00:27:33.782 "product_name": "Malloc disk", 00:27:33.782 "block_size": 512, 00:27:33.782 "num_blocks": 65536, 00:27:33.782 "uuid": "6d8f8266-14a7-4a92-b554-1caf6bdd24e2", 00:27:33.782 "assigned_rate_limits": { 00:27:33.782 "rw_ios_per_sec": 0, 00:27:33.782 "rw_mbytes_per_sec": 0, 00:27:33.782 "r_mbytes_per_sec": 0, 00:27:33.782 "w_mbytes_per_sec": 0 00:27:33.782 }, 00:27:33.782 "claimed": true, 00:27:33.782 "claim_type": "exclusive_write", 00:27:33.782 "zoned": false, 00:27:33.782 "supported_io_types": { 00:27:33.782 "read": true, 00:27:33.782 "write": true, 00:27:33.782 "unmap": true, 00:27:33.782 "flush": true, 00:27:33.782 "reset": true, 00:27:33.782 "nvme_admin": false, 00:27:33.782 "nvme_io": false, 00:27:33.782 "nvme_io_md": false, 00:27:33.782 "write_zeroes": true, 00:27:33.782 "zcopy": true, 00:27:33.782 "get_zone_info": false, 00:27:33.782 "zone_management": false, 00:27:33.782 "zone_append": false, 00:27:33.782 "compare": false, 00:27:33.782 "compare_and_write": false, 00:27:33.782 "abort": true, 00:27:33.782 "seek_hole": false, 00:27:33.782 "seek_data": false, 00:27:33.782 "copy": true, 00:27:33.782 "nvme_iov_md": false 00:27:33.782 }, 00:27:33.782 "memory_domains": [ 00:27:33.782 { 00:27:33.782 "dma_device_id": "system", 00:27:33.782 "dma_device_type": 1 00:27:33.782 }, 00:27:33.782 { 00:27:33.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.783 "dma_device_type": 2 00:27:33.783 } 00:27:33.783 ], 00:27:33.783 "driver_specific": {} 00:27:33.783 } 00:27:33.783 ] 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.783 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:34.041 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.041 "name": "Existed_Raid", 00:27:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.041 "strip_size_kb": 0, 00:27:34.041 "state": "configuring", 00:27:34.041 "raid_level": "raid1", 00:27:34.041 "superblock": false, 00:27:34.041 "num_base_bdevs": 4, 00:27:34.041 "num_base_bdevs_discovered": 2, 00:27:34.041 "num_base_bdevs_operational": 4, 00:27:34.041 "base_bdevs_list": [ 00:27:34.041 { 00:27:34.041 "name": "BaseBdev1", 00:27:34.041 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:34.041 "is_configured": true, 00:27:34.041 "data_offset": 0, 00:27:34.041 "data_size": 65536 00:27:34.041 }, 00:27:34.041 { 00:27:34.041 "name": "BaseBdev2", 00:27:34.041 "uuid": "6d8f8266-14a7-4a92-b554-1caf6bdd24e2", 00:27:34.042 "is_configured": true, 00:27:34.042 "data_offset": 0, 00:27:34.042 "data_size": 65536 00:27:34.042 }, 00:27:34.042 { 00:27:34.042 "name": "BaseBdev3", 00:27:34.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.042 "is_configured": false, 00:27:34.042 "data_offset": 0, 00:27:34.042 "data_size": 0 00:27:34.042 }, 00:27:34.042 { 00:27:34.042 "name": "BaseBdev4", 00:27:34.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.042 "is_configured": false, 00:27:34.042 "data_offset": 0, 00:27:34.042 "data_size": 0 00:27:34.042 } 00:27:34.042 ] 00:27:34.042 }' 00:27:34.042 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.042 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.610 14:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
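[Editor's note - not part of the captured log] Each bdev_malloc_create in this test is followed by the same verification step: the harness waits for the new malloc bdev to be examined, then re-reads Existed_Raid and checks that num_base_bdevs_discovered has grown while the array stays in the "configuring" state. A condensed sketch of that check, using the exact RPC and jq filter shown above; the [[ ]] assertions are an illustrative stand-in for the harness's verify_raid_bdev_state comparisons, and the expected count of 3 assumes the BaseBdev3 create issued above has been claimed:

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state <<<"$info") == configuring ]]              # raid stays configuring until all 4 bases exist
  [[ $(jq -r .num_base_bdevs_discovered <<<"$info") -eq 3 ]]   # BaseBdev1..3 created so far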
00:27:34.869 [2024-07-25 14:10:23.879768] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:34.869 BaseBdev3 00:27:34.869 14:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:27:34.869 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:34.869 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:34.869 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:34.869 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:34.869 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:34.869 14:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:35.127 14:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:35.695 [ 00:27:35.695 { 00:27:35.695 "name": "BaseBdev3", 00:27:35.695 "aliases": [ 00:27:35.695 "bc01e869-ed81-4c82-bd25-ea4d7c96110f" 00:27:35.695 ], 00:27:35.695 "product_name": "Malloc disk", 00:27:35.695 "block_size": 512, 00:27:35.695 "num_blocks": 65536, 00:27:35.695 "uuid": "bc01e869-ed81-4c82-bd25-ea4d7c96110f", 00:27:35.695 "assigned_rate_limits": { 00:27:35.695 "rw_ios_per_sec": 0, 00:27:35.695 "rw_mbytes_per_sec": 0, 00:27:35.695 "r_mbytes_per_sec": 0, 00:27:35.695 "w_mbytes_per_sec": 0 00:27:35.695 }, 00:27:35.695 "claimed": true, 00:27:35.695 "claim_type": "exclusive_write", 00:27:35.695 "zoned": false, 00:27:35.695 "supported_io_types": { 00:27:35.695 "read": true, 00:27:35.695 "write": true, 00:27:35.695 "unmap": true, 00:27:35.695 "flush": true, 00:27:35.695 "reset": true, 00:27:35.695 "nvme_admin": false, 00:27:35.695 "nvme_io": false, 00:27:35.695 "nvme_io_md": false, 00:27:35.695 "write_zeroes": true, 00:27:35.695 "zcopy": true, 00:27:35.695 "get_zone_info": false, 00:27:35.695 "zone_management": false, 00:27:35.695 "zone_append": false, 00:27:35.695 "compare": false, 00:27:35.695 "compare_and_write": false, 00:27:35.695 "abort": true, 00:27:35.695 "seek_hole": false, 00:27:35.695 "seek_data": false, 00:27:35.695 "copy": true, 00:27:35.695 "nvme_iov_md": false 00:27:35.695 }, 00:27:35.695 "memory_domains": [ 00:27:35.695 { 00:27:35.695 "dma_device_id": "system", 00:27:35.695 "dma_device_type": 1 00:27:35.695 }, 00:27:35.695 { 00:27:35.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:35.695 "dma_device_type": 2 00:27:35.695 } 00:27:35.695 ], 00:27:35.695 "driver_specific": {} 00:27:35.695 } 00:27:35.695 ] 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:35.695 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:35.695 "name": "Existed_Raid", 00:27:35.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.695 "strip_size_kb": 0, 00:27:35.695 "state": "configuring", 00:27:35.695 "raid_level": "raid1", 00:27:35.695 "superblock": false, 00:27:35.695 "num_base_bdevs": 4, 00:27:35.695 "num_base_bdevs_discovered": 3, 00:27:35.695 "num_base_bdevs_operational": 4, 00:27:35.695 "base_bdevs_list": [ 00:27:35.695 { 00:27:35.695 "name": "BaseBdev1", 00:27:35.695 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:35.695 "is_configured": true, 00:27:35.695 "data_offset": 0, 00:27:35.695 "data_size": 65536 00:27:35.695 }, 00:27:35.695 { 00:27:35.695 "name": "BaseBdev2", 00:27:35.695 "uuid": "6d8f8266-14a7-4a92-b554-1caf6bdd24e2", 00:27:35.695 "is_configured": true, 00:27:35.695 "data_offset": 0, 00:27:35.695 "data_size": 65536 00:27:35.696 }, 00:27:35.696 { 00:27:35.696 "name": "BaseBdev3", 00:27:35.696 "uuid": "bc01e869-ed81-4c82-bd25-ea4d7c96110f", 00:27:35.696 "is_configured": true, 00:27:35.696 "data_offset": 0, 00:27:35.696 "data_size": 65536 00:27:35.696 }, 00:27:35.696 { 00:27:35.696 "name": "BaseBdev4", 00:27:35.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.696 "is_configured": false, 00:27:35.696 "data_offset": 0, 00:27:35.696 "data_size": 0 00:27:35.696 } 00:27:35.696 ] 00:27:35.696 }' 00:27:35.696 14:10:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.696 14:10:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:36.665 [2024-07-25 14:10:25.671030] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:36.665 [2024-07-25 14:10:25.671440] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:27:36.665 [2024-07-25 14:10:25.671553] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:36.665 [2024-07-25 14:10:25.671759] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:27:36.665 [2024-07-25 14:10:25.672303] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000013100 00:27:36.665 [2024-07-25 14:10:25.672442] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:27:36.665 [2024-07-25 14:10:25.672910] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:36.665 BaseBdev4 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:36.665 14:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:37.231 14:10:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:37.231 [ 00:27:37.231 { 00:27:37.231 "name": "BaseBdev4", 00:27:37.231 "aliases": [ 00:27:37.231 "d2e6ca62-9a82-44c0-9dea-8dfe1cdaf0a9" 00:27:37.231 ], 00:27:37.231 "product_name": "Malloc disk", 00:27:37.231 "block_size": 512, 00:27:37.231 "num_blocks": 65536, 00:27:37.231 "uuid": "d2e6ca62-9a82-44c0-9dea-8dfe1cdaf0a9", 00:27:37.231 "assigned_rate_limits": { 00:27:37.231 "rw_ios_per_sec": 0, 00:27:37.231 "rw_mbytes_per_sec": 0, 00:27:37.231 "r_mbytes_per_sec": 0, 00:27:37.231 "w_mbytes_per_sec": 0 00:27:37.231 }, 00:27:37.231 "claimed": true, 00:27:37.231 "claim_type": "exclusive_write", 00:27:37.231 "zoned": false, 00:27:37.231 "supported_io_types": { 00:27:37.231 "read": true, 00:27:37.231 "write": true, 00:27:37.231 "unmap": true, 00:27:37.231 "flush": true, 00:27:37.231 "reset": true, 00:27:37.231 "nvme_admin": false, 00:27:37.231 "nvme_io": false, 00:27:37.231 "nvme_io_md": false, 00:27:37.231 "write_zeroes": true, 00:27:37.231 "zcopy": true, 00:27:37.231 "get_zone_info": false, 00:27:37.231 "zone_management": false, 00:27:37.231 "zone_append": false, 00:27:37.231 "compare": false, 00:27:37.231 "compare_and_write": false, 00:27:37.231 "abort": true, 00:27:37.231 "seek_hole": false, 00:27:37.231 "seek_data": false, 00:27:37.231 "copy": true, 00:27:37.231 "nvme_iov_md": false 00:27:37.231 }, 00:27:37.231 "memory_domains": [ 00:27:37.231 { 00:27:37.231 "dma_device_id": "system", 00:27:37.231 "dma_device_type": 1 00:27:37.231 }, 00:27:37.231 { 00:27:37.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:37.231 "dma_device_type": 2 00:27:37.231 } 00:27:37.231 ], 00:27:37.231 "driver_specific": {} 00:27:37.231 } 00:27:37.231 ] 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.231 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:37.490 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.490 "name": "Existed_Raid", 00:27:37.490 "uuid": "e215d991-19c4-47e5-b6d3-86dfd237981d", 00:27:37.490 "strip_size_kb": 0, 00:27:37.490 "state": "online", 00:27:37.490 "raid_level": "raid1", 00:27:37.490 "superblock": false, 00:27:37.490 "num_base_bdevs": 4, 00:27:37.490 "num_base_bdevs_discovered": 4, 00:27:37.490 "num_base_bdevs_operational": 4, 00:27:37.490 "base_bdevs_list": [ 00:27:37.490 { 00:27:37.490 "name": "BaseBdev1", 00:27:37.490 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:37.490 "is_configured": true, 00:27:37.490 "data_offset": 0, 00:27:37.490 "data_size": 65536 00:27:37.490 }, 00:27:37.490 { 00:27:37.490 "name": "BaseBdev2", 00:27:37.490 "uuid": "6d8f8266-14a7-4a92-b554-1caf6bdd24e2", 00:27:37.490 "is_configured": true, 00:27:37.490 "data_offset": 0, 00:27:37.490 "data_size": 65536 00:27:37.490 }, 00:27:37.490 { 00:27:37.490 "name": "BaseBdev3", 00:27:37.490 "uuid": "bc01e869-ed81-4c82-bd25-ea4d7c96110f", 00:27:37.490 "is_configured": true, 00:27:37.490 "data_offset": 0, 00:27:37.490 "data_size": 65536 00:27:37.490 }, 00:27:37.490 { 00:27:37.490 "name": "BaseBdev4", 00:27:37.490 "uuid": "d2e6ca62-9a82-44c0-9dea-8dfe1cdaf0a9", 00:27:37.490 "is_configured": true, 00:27:37.490 "data_offset": 0, 00:27:37.490 "data_size": 65536 00:27:37.490 } 00:27:37.490 ] 00:27:37.490 }' 00:27:37.490 14:10:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.490 14:10:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:38.426 [2024-07-25 14:10:27.363874] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:38.426 "name": "Existed_Raid", 00:27:38.426 "aliases": [ 00:27:38.426 "e215d991-19c4-47e5-b6d3-86dfd237981d" 00:27:38.426 ], 00:27:38.426 "product_name": "Raid Volume", 00:27:38.426 "block_size": 512, 00:27:38.426 "num_blocks": 65536, 00:27:38.426 "uuid": "e215d991-19c4-47e5-b6d3-86dfd237981d", 00:27:38.426 "assigned_rate_limits": { 00:27:38.426 "rw_ios_per_sec": 0, 00:27:38.426 "rw_mbytes_per_sec": 0, 00:27:38.426 "r_mbytes_per_sec": 0, 00:27:38.426 "w_mbytes_per_sec": 0 00:27:38.426 }, 00:27:38.426 "claimed": false, 00:27:38.426 "zoned": false, 00:27:38.426 "supported_io_types": { 00:27:38.426 "read": true, 00:27:38.426 "write": true, 00:27:38.426 "unmap": false, 00:27:38.426 "flush": false, 00:27:38.426 "reset": true, 00:27:38.426 "nvme_admin": false, 00:27:38.426 "nvme_io": false, 00:27:38.426 "nvme_io_md": false, 00:27:38.426 "write_zeroes": true, 00:27:38.426 "zcopy": false, 00:27:38.426 "get_zone_info": false, 00:27:38.426 "zone_management": false, 00:27:38.426 "zone_append": false, 00:27:38.426 "compare": false, 00:27:38.426 "compare_and_write": false, 00:27:38.426 "abort": false, 00:27:38.426 "seek_hole": false, 00:27:38.426 "seek_data": false, 00:27:38.426 "copy": false, 00:27:38.426 "nvme_iov_md": false 00:27:38.426 }, 00:27:38.426 "memory_domains": [ 00:27:38.426 { 00:27:38.426 "dma_device_id": "system", 00:27:38.426 "dma_device_type": 1 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.426 "dma_device_type": 2 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "dma_device_id": "system", 00:27:38.426 "dma_device_type": 1 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.426 "dma_device_type": 2 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "dma_device_id": "system", 00:27:38.426 "dma_device_type": 1 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.426 "dma_device_type": 2 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "dma_device_id": "system", 00:27:38.426 "dma_device_type": 1 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.426 "dma_device_type": 2 00:27:38.426 } 00:27:38.426 ], 00:27:38.426 "driver_specific": { 00:27:38.426 "raid": { 00:27:38.426 "uuid": "e215d991-19c4-47e5-b6d3-86dfd237981d", 00:27:38.426 "strip_size_kb": 0, 00:27:38.426 "state": "online", 00:27:38.426 "raid_level": "raid1", 00:27:38.426 "superblock": false, 00:27:38.426 "num_base_bdevs": 4, 00:27:38.426 "num_base_bdevs_discovered": 4, 00:27:38.426 "num_base_bdevs_operational": 4, 00:27:38.426 "base_bdevs_list": [ 00:27:38.426 { 00:27:38.426 "name": "BaseBdev1", 00:27:38.426 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:38.426 "is_configured": true, 00:27:38.426 "data_offset": 0, 00:27:38.426 "data_size": 65536 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "name": "BaseBdev2", 00:27:38.426 "uuid": "6d8f8266-14a7-4a92-b554-1caf6bdd24e2", 00:27:38.426 "is_configured": true, 00:27:38.426 "data_offset": 0, 00:27:38.426 
"data_size": 65536 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "name": "BaseBdev3", 00:27:38.426 "uuid": "bc01e869-ed81-4c82-bd25-ea4d7c96110f", 00:27:38.426 "is_configured": true, 00:27:38.426 "data_offset": 0, 00:27:38.426 "data_size": 65536 00:27:38.426 }, 00:27:38.426 { 00:27:38.426 "name": "BaseBdev4", 00:27:38.426 "uuid": "d2e6ca62-9a82-44c0-9dea-8dfe1cdaf0a9", 00:27:38.426 "is_configured": true, 00:27:38.426 "data_offset": 0, 00:27:38.426 "data_size": 65536 00:27:38.426 } 00:27:38.426 ] 00:27:38.426 } 00:27:38.426 } 00:27:38.426 }' 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:38.426 BaseBdev2 00:27:38.426 BaseBdev3 00:27:38.426 BaseBdev4' 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:38.426 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:38.684 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:38.684 "name": "BaseBdev1", 00:27:38.684 "aliases": [ 00:27:38.684 "b8749bfe-4721-4937-96ce-a8cc450b33e8" 00:27:38.684 ], 00:27:38.684 "product_name": "Malloc disk", 00:27:38.684 "block_size": 512, 00:27:38.684 "num_blocks": 65536, 00:27:38.684 "uuid": "b8749bfe-4721-4937-96ce-a8cc450b33e8", 00:27:38.684 "assigned_rate_limits": { 00:27:38.684 "rw_ios_per_sec": 0, 00:27:38.684 "rw_mbytes_per_sec": 0, 00:27:38.684 "r_mbytes_per_sec": 0, 00:27:38.684 "w_mbytes_per_sec": 0 00:27:38.684 }, 00:27:38.684 "claimed": true, 00:27:38.684 "claim_type": "exclusive_write", 00:27:38.685 "zoned": false, 00:27:38.685 "supported_io_types": { 00:27:38.685 "read": true, 00:27:38.685 "write": true, 00:27:38.685 "unmap": true, 00:27:38.685 "flush": true, 00:27:38.685 "reset": true, 00:27:38.685 "nvme_admin": false, 00:27:38.685 "nvme_io": false, 00:27:38.685 "nvme_io_md": false, 00:27:38.685 "write_zeroes": true, 00:27:38.685 "zcopy": true, 00:27:38.685 "get_zone_info": false, 00:27:38.685 "zone_management": false, 00:27:38.685 "zone_append": false, 00:27:38.685 "compare": false, 00:27:38.685 "compare_and_write": false, 00:27:38.685 "abort": true, 00:27:38.685 "seek_hole": false, 00:27:38.685 "seek_data": false, 00:27:38.685 "copy": true, 00:27:38.685 "nvme_iov_md": false 00:27:38.685 }, 00:27:38.685 "memory_domains": [ 00:27:38.685 { 00:27:38.685 "dma_device_id": "system", 00:27:38.685 "dma_device_type": 1 00:27:38.685 }, 00:27:38.685 { 00:27:38.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.685 "dma_device_type": 2 00:27:38.685 } 00:27:38.685 ], 00:27:38.685 "driver_specific": {} 00:27:38.685 }' 00:27:38.685 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:38.943 14:10:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:38.943 14:10:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.201 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.201 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:39.201 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:39.201 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:39.201 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:39.459 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:39.459 "name": "BaseBdev2", 00:27:39.459 "aliases": [ 00:27:39.459 "6d8f8266-14a7-4a92-b554-1caf6bdd24e2" 00:27:39.459 ], 00:27:39.459 "product_name": "Malloc disk", 00:27:39.459 "block_size": 512, 00:27:39.459 "num_blocks": 65536, 00:27:39.459 "uuid": "6d8f8266-14a7-4a92-b554-1caf6bdd24e2", 00:27:39.459 "assigned_rate_limits": { 00:27:39.459 "rw_ios_per_sec": 0, 00:27:39.459 "rw_mbytes_per_sec": 0, 00:27:39.459 "r_mbytes_per_sec": 0, 00:27:39.460 "w_mbytes_per_sec": 0 00:27:39.460 }, 00:27:39.460 "claimed": true, 00:27:39.460 "claim_type": "exclusive_write", 00:27:39.460 "zoned": false, 00:27:39.460 "supported_io_types": { 00:27:39.460 "read": true, 00:27:39.460 "write": true, 00:27:39.460 "unmap": true, 00:27:39.460 "flush": true, 00:27:39.460 "reset": true, 00:27:39.460 "nvme_admin": false, 00:27:39.460 "nvme_io": false, 00:27:39.460 "nvme_io_md": false, 00:27:39.460 "write_zeroes": true, 00:27:39.460 "zcopy": true, 00:27:39.460 "get_zone_info": false, 00:27:39.460 "zone_management": false, 00:27:39.460 "zone_append": false, 00:27:39.460 "compare": false, 00:27:39.460 "compare_and_write": false, 00:27:39.460 "abort": true, 00:27:39.460 "seek_hole": false, 00:27:39.460 "seek_data": false, 00:27:39.460 "copy": true, 00:27:39.460 "nvme_iov_md": false 00:27:39.460 }, 00:27:39.460 "memory_domains": [ 00:27:39.460 { 00:27:39.460 "dma_device_id": "system", 00:27:39.460 "dma_device_type": 1 00:27:39.460 }, 00:27:39.460 { 00:27:39.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.460 "dma_device_type": 2 00:27:39.460 } 00:27:39.460 ], 00:27:39.460 "driver_specific": {} 00:27:39.460 }' 00:27:39.460 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.460 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:39.460 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:39.460 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.718 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:39.718 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:39.718 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
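[editor's note] The property checks logged above repeat one pattern per base bdev: fetch the raid volume over the RPC socket, list its configured base bdevs, then fetch each one and compare fields with jq. A minimal sketch of that pattern, assuming the Existed_Raid volume and the /var/tmp/spdk-raid.sock socket shown in this log (not the verbatim bdev_raid.sh helper):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Pull the raid bdev and the names of its currently configured base bdevs.
  raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
  for name in $names; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      # Each base bdev must match the raid volume's block size; md_size,
      # md_interleave and dif_type stay null for the plain malloc bdevs used here.
      [[ $(jq .block_size <<< "$info") == $(jq .block_size <<< "$raid_info") ]]
      [[ $(jq .md_size <<< "$info") == null ]]
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type <<< "$info") == null ]]
  done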
00:27:39.718 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:39.718 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:39.718 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.718 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:39.976 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:39.976 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:39.976 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:39.976 14:10:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:40.234 "name": "BaseBdev3", 00:27:40.234 "aliases": [ 00:27:40.234 "bc01e869-ed81-4c82-bd25-ea4d7c96110f" 00:27:40.234 ], 00:27:40.234 "product_name": "Malloc disk", 00:27:40.234 "block_size": 512, 00:27:40.234 "num_blocks": 65536, 00:27:40.234 "uuid": "bc01e869-ed81-4c82-bd25-ea4d7c96110f", 00:27:40.234 "assigned_rate_limits": { 00:27:40.234 "rw_ios_per_sec": 0, 00:27:40.234 "rw_mbytes_per_sec": 0, 00:27:40.234 "r_mbytes_per_sec": 0, 00:27:40.234 "w_mbytes_per_sec": 0 00:27:40.234 }, 00:27:40.234 "claimed": true, 00:27:40.234 "claim_type": "exclusive_write", 00:27:40.234 "zoned": false, 00:27:40.234 "supported_io_types": { 00:27:40.234 "read": true, 00:27:40.234 "write": true, 00:27:40.234 "unmap": true, 00:27:40.234 "flush": true, 00:27:40.234 "reset": true, 00:27:40.234 "nvme_admin": false, 00:27:40.234 "nvme_io": false, 00:27:40.234 "nvme_io_md": false, 00:27:40.234 "write_zeroes": true, 00:27:40.234 "zcopy": true, 00:27:40.234 "get_zone_info": false, 00:27:40.234 "zone_management": false, 00:27:40.234 "zone_append": false, 00:27:40.234 "compare": false, 00:27:40.234 "compare_and_write": false, 00:27:40.234 "abort": true, 00:27:40.234 "seek_hole": false, 00:27:40.234 "seek_data": false, 00:27:40.234 "copy": true, 00:27:40.234 "nvme_iov_md": false 00:27:40.234 }, 00:27:40.234 "memory_domains": [ 00:27:40.234 { 00:27:40.234 "dma_device_id": "system", 00:27:40.234 "dma_device_type": 1 00:27:40.234 }, 00:27:40.234 { 00:27:40.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.234 "dma_device_type": 2 00:27:40.234 } 00:27:40.234 ], 00:27:40.234 "driver_specific": {} 00:27:40.234 }' 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:40.234 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:40.493 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:40.751 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:40.751 "name": "BaseBdev4", 00:27:40.751 "aliases": [ 00:27:40.751 "d2e6ca62-9a82-44c0-9dea-8dfe1cdaf0a9" 00:27:40.751 ], 00:27:40.751 "product_name": "Malloc disk", 00:27:40.751 "block_size": 512, 00:27:40.751 "num_blocks": 65536, 00:27:40.751 "uuid": "d2e6ca62-9a82-44c0-9dea-8dfe1cdaf0a9", 00:27:40.751 "assigned_rate_limits": { 00:27:40.751 "rw_ios_per_sec": 0, 00:27:40.751 "rw_mbytes_per_sec": 0, 00:27:40.751 "r_mbytes_per_sec": 0, 00:27:40.751 "w_mbytes_per_sec": 0 00:27:40.751 }, 00:27:40.751 "claimed": true, 00:27:40.751 "claim_type": "exclusive_write", 00:27:40.751 "zoned": false, 00:27:40.751 "supported_io_types": { 00:27:40.751 "read": true, 00:27:40.751 "write": true, 00:27:40.751 "unmap": true, 00:27:40.751 "flush": true, 00:27:40.751 "reset": true, 00:27:40.751 "nvme_admin": false, 00:27:40.751 "nvme_io": false, 00:27:40.751 "nvme_io_md": false, 00:27:40.751 "write_zeroes": true, 00:27:40.751 "zcopy": true, 00:27:40.751 "get_zone_info": false, 00:27:40.751 "zone_management": false, 00:27:40.751 "zone_append": false, 00:27:40.751 "compare": false, 00:27:40.751 "compare_and_write": false, 00:27:40.751 "abort": true, 00:27:40.751 "seek_hole": false, 00:27:40.751 "seek_data": false, 00:27:40.751 "copy": true, 00:27:40.751 "nvme_iov_md": false 00:27:40.751 }, 00:27:40.751 "memory_domains": [ 00:27:40.751 { 00:27:40.751 "dma_device_id": "system", 00:27:40.751 "dma_device_type": 1 00:27:40.751 }, 00:27:40.751 { 00:27:40.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.751 "dma_device_type": 2 00:27:40.751 } 00:27:40.751 ], 00:27:40.751 "driver_specific": {} 00:27:40.751 }' 00:27:40.751 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:40.751 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:41.009 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:41.009 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:41.009 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:41.009 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:41.009 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:41.009 14:10:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:41.009 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:41.009 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:41.267 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:27:41.267 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:41.267 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:41.525 [2024-07-25 14:10:30.388911] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.525 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.784 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.784 "name": "Existed_Raid", 00:27:41.784 "uuid": "e215d991-19c4-47e5-b6d3-86dfd237981d", 00:27:41.784 "strip_size_kb": 0, 00:27:41.784 "state": "online", 00:27:41.784 "raid_level": "raid1", 00:27:41.784 "superblock": false, 00:27:41.784 "num_base_bdevs": 4, 00:27:41.784 "num_base_bdevs_discovered": 3, 00:27:41.784 "num_base_bdevs_operational": 3, 00:27:41.784 "base_bdevs_list": [ 00:27:41.784 { 00:27:41.784 "name": null, 00:27:41.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.784 "is_configured": false, 00:27:41.784 "data_offset": 0, 00:27:41.784 "data_size": 65536 00:27:41.784 }, 00:27:41.784 { 00:27:41.784 "name": "BaseBdev2", 00:27:41.784 "uuid": "6d8f8266-14a7-4a92-b554-1caf6bdd24e2", 00:27:41.784 "is_configured": true, 00:27:41.784 "data_offset": 0, 00:27:41.784 "data_size": 65536 00:27:41.784 }, 00:27:41.784 { 00:27:41.784 "name": "BaseBdev3", 00:27:41.784 "uuid": "bc01e869-ed81-4c82-bd25-ea4d7c96110f", 00:27:41.784 "is_configured": true, 00:27:41.784 "data_offset": 0, 00:27:41.784 "data_size": 65536 00:27:41.784 
}, 00:27:41.784 { 00:27:41.784 "name": "BaseBdev4", 00:27:41.784 "uuid": "d2e6ca62-9a82-44c0-9dea-8dfe1cdaf0a9", 00:27:41.784 "is_configured": true, 00:27:41.784 "data_offset": 0, 00:27:41.784 "data_size": 65536 00:27:41.784 } 00:27:41.784 ] 00:27:41.784 }' 00:27:41.784 14:10:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.784 14:10:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.718 14:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:42.718 14:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:42.718 14:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.718 14:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:42.719 14:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:42.719 14:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:42.719 14:10:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:42.976 [2024-07-25 14:10:31.919433] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:42.976 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:42.976 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:43.234 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.234 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:43.234 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:43.234 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:43.234 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:43.541 [2024-07-25 14:10:32.506727] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:43.800 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:43.800 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:43.800 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.800 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:44.058 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:44.058 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:44.058 14:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:44.058 [2024-07-25 14:10:33.060728] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
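[editor's note] The step above deletes the malloc bdev behind BaseBdev1; because raid1 carries redundancy, the test expects Existed_Raid to stay online with one fewer operational member (3 of 4). A sketch of that state check, assuming the same volume and RPC socket used throughout this run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Removing the backing malloc bdev triggers base-bdev removal inside the raid.
  $rpc bdev_malloc_delete BaseBdev1
  # Re-read the raid bdev and confirm it survived with 3 operational base bdevs.
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state <<< "$info") == online ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 3 ]]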
00:27:44.058 [2024-07-25 14:10:33.061050] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:44.336 [2024-07-25 14:10:33.144828] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:44.336 [2024-07-25 14:10:33.145057] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:44.336 [2024-07-25 14:10:33.145171] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:27:44.336 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:44.337 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:44.337 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.337 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:44.594 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:44.594 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:44.594 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:27:44.594 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:27:44.594 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:44.594 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:44.852 BaseBdev2 00:27:44.852 14:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:27:44.852 14:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:44.852 14:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:44.852 14:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:44.852 14:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:44.852 14:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:44.852 14:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:45.109 14:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:45.365 [ 00:27:45.365 { 00:27:45.365 "name": "BaseBdev2", 00:27:45.365 "aliases": [ 00:27:45.366 "b48bdf28-a79f-4937-a95e-70921809db9e" 00:27:45.366 ], 00:27:45.366 "product_name": "Malloc disk", 00:27:45.366 "block_size": 512, 00:27:45.366 "num_blocks": 65536, 00:27:45.366 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:45.366 "assigned_rate_limits": { 00:27:45.366 "rw_ios_per_sec": 0, 00:27:45.366 "rw_mbytes_per_sec": 0, 00:27:45.366 "r_mbytes_per_sec": 0, 00:27:45.366 "w_mbytes_per_sec": 0 00:27:45.366 }, 00:27:45.366 "claimed": false, 00:27:45.366 "zoned": false, 00:27:45.366 "supported_io_types": { 00:27:45.366 "read": true, 00:27:45.366 "write": true, 00:27:45.366 
"unmap": true, 00:27:45.366 "flush": true, 00:27:45.366 "reset": true, 00:27:45.366 "nvme_admin": false, 00:27:45.366 "nvme_io": false, 00:27:45.366 "nvme_io_md": false, 00:27:45.366 "write_zeroes": true, 00:27:45.366 "zcopy": true, 00:27:45.366 "get_zone_info": false, 00:27:45.366 "zone_management": false, 00:27:45.366 "zone_append": false, 00:27:45.366 "compare": false, 00:27:45.366 "compare_and_write": false, 00:27:45.366 "abort": true, 00:27:45.366 "seek_hole": false, 00:27:45.366 "seek_data": false, 00:27:45.366 "copy": true, 00:27:45.366 "nvme_iov_md": false 00:27:45.366 }, 00:27:45.366 "memory_domains": [ 00:27:45.366 { 00:27:45.366 "dma_device_id": "system", 00:27:45.366 "dma_device_type": 1 00:27:45.366 }, 00:27:45.366 { 00:27:45.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.366 "dma_device_type": 2 00:27:45.366 } 00:27:45.366 ], 00:27:45.366 "driver_specific": {} 00:27:45.366 } 00:27:45.366 ] 00:27:45.366 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:45.366 14:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:45.366 14:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:45.366 14:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:45.623 BaseBdev3 00:27:45.623 14:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:27:45.623 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:45.623 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:45.623 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:45.623 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:45.623 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:45.623 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:45.882 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:46.140 [ 00:27:46.140 { 00:27:46.140 "name": "BaseBdev3", 00:27:46.140 "aliases": [ 00:27:46.140 "c091a24e-53c8-408e-b9e4-b0793e732736" 00:27:46.140 ], 00:27:46.140 "product_name": "Malloc disk", 00:27:46.140 "block_size": 512, 00:27:46.140 "num_blocks": 65536, 00:27:46.140 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:46.140 "assigned_rate_limits": { 00:27:46.140 "rw_ios_per_sec": 0, 00:27:46.140 "rw_mbytes_per_sec": 0, 00:27:46.140 "r_mbytes_per_sec": 0, 00:27:46.140 "w_mbytes_per_sec": 0 00:27:46.140 }, 00:27:46.140 "claimed": false, 00:27:46.140 "zoned": false, 00:27:46.140 "supported_io_types": { 00:27:46.140 "read": true, 00:27:46.140 "write": true, 00:27:46.140 "unmap": true, 00:27:46.140 "flush": true, 00:27:46.140 "reset": true, 00:27:46.140 "nvme_admin": false, 00:27:46.140 "nvme_io": false, 00:27:46.140 "nvme_io_md": false, 00:27:46.140 "write_zeroes": true, 00:27:46.140 "zcopy": true, 00:27:46.140 "get_zone_info": false, 00:27:46.140 "zone_management": false, 00:27:46.140 "zone_append": false, 
00:27:46.140 "compare": false, 00:27:46.140 "compare_and_write": false, 00:27:46.140 "abort": true, 00:27:46.140 "seek_hole": false, 00:27:46.140 "seek_data": false, 00:27:46.140 "copy": true, 00:27:46.140 "nvme_iov_md": false 00:27:46.140 }, 00:27:46.140 "memory_domains": [ 00:27:46.140 { 00:27:46.140 "dma_device_id": "system", 00:27:46.140 "dma_device_type": 1 00:27:46.140 }, 00:27:46.140 { 00:27:46.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.140 "dma_device_type": 2 00:27:46.140 } 00:27:46.140 ], 00:27:46.140 "driver_specific": {} 00:27:46.140 } 00:27:46.140 ] 00:27:46.140 14:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:46.140 14:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:46.140 14:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:46.140 14:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:46.398 BaseBdev4 00:27:46.398 14:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:27:46.398 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:46.398 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:46.398 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:46.398 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:46.398 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:46.398 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:46.656 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:46.914 [ 00:27:46.914 { 00:27:46.914 "name": "BaseBdev4", 00:27:46.914 "aliases": [ 00:27:46.914 "256a80c0-01d3-4cf6-9f5c-b7a278e7091c" 00:27:46.914 ], 00:27:46.914 "product_name": "Malloc disk", 00:27:46.914 "block_size": 512, 00:27:46.914 "num_blocks": 65536, 00:27:46.914 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:46.914 "assigned_rate_limits": { 00:27:46.914 "rw_ios_per_sec": 0, 00:27:46.914 "rw_mbytes_per_sec": 0, 00:27:46.914 "r_mbytes_per_sec": 0, 00:27:46.914 "w_mbytes_per_sec": 0 00:27:46.914 }, 00:27:46.914 "claimed": false, 00:27:46.914 "zoned": false, 00:27:46.914 "supported_io_types": { 00:27:46.914 "read": true, 00:27:46.914 "write": true, 00:27:46.914 "unmap": true, 00:27:46.914 "flush": true, 00:27:46.914 "reset": true, 00:27:46.914 "nvme_admin": false, 00:27:46.914 "nvme_io": false, 00:27:46.914 "nvme_io_md": false, 00:27:46.914 "write_zeroes": true, 00:27:46.914 "zcopy": true, 00:27:46.914 "get_zone_info": false, 00:27:46.914 "zone_management": false, 00:27:46.914 "zone_append": false, 00:27:46.914 "compare": false, 00:27:46.914 "compare_and_write": false, 00:27:46.914 "abort": true, 00:27:46.914 "seek_hole": false, 00:27:46.914 "seek_data": false, 00:27:46.914 "copy": true, 00:27:46.914 "nvme_iov_md": false 00:27:46.914 }, 00:27:46.914 "memory_domains": [ 00:27:46.914 { 00:27:46.914 "dma_device_id": "system", 00:27:46.914 
"dma_device_type": 1 00:27:46.914 }, 00:27:46.914 { 00:27:46.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.914 "dma_device_type": 2 00:27:46.914 } 00:27:46.914 ], 00:27:46.914 "driver_specific": {} 00:27:46.914 } 00:27:46.914 ] 00:27:46.914 14:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:46.914 14:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:46.914 14:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:46.915 14:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:47.192 [2024-07-25 14:10:35.987404] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:47.192 [2024-07-25 14:10:35.987680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:47.192 [2024-07-25 14:10:35.987835] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:47.192 [2024-07-25 14:10:35.990104] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:47.192 [2024-07-25 14:10:35.990328] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.192 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.450 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:47.450 "name": "Existed_Raid", 00:27:47.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.450 "strip_size_kb": 0, 00:27:47.450 "state": "configuring", 00:27:47.450 "raid_level": "raid1", 00:27:47.450 "superblock": false, 00:27:47.450 "num_base_bdevs": 4, 00:27:47.450 "num_base_bdevs_discovered": 3, 00:27:47.450 "num_base_bdevs_operational": 4, 00:27:47.450 "base_bdevs_list": [ 00:27:47.450 { 00:27:47.450 "name": "BaseBdev1", 00:27:47.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.450 "is_configured": false, 
00:27:47.450 "data_offset": 0, 00:27:47.450 "data_size": 0 00:27:47.451 }, 00:27:47.451 { 00:27:47.451 "name": "BaseBdev2", 00:27:47.451 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:47.451 "is_configured": true, 00:27:47.451 "data_offset": 0, 00:27:47.451 "data_size": 65536 00:27:47.451 }, 00:27:47.451 { 00:27:47.451 "name": "BaseBdev3", 00:27:47.451 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:47.451 "is_configured": true, 00:27:47.451 "data_offset": 0, 00:27:47.451 "data_size": 65536 00:27:47.451 }, 00:27:47.451 { 00:27:47.451 "name": "BaseBdev4", 00:27:47.451 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:47.451 "is_configured": true, 00:27:47.451 "data_offset": 0, 00:27:47.451 "data_size": 65536 00:27:47.451 } 00:27:47.451 ] 00:27:47.451 }' 00:27:47.451 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.451 14:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.016 14:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:48.274 [2024-07-25 14:10:37.223761] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:48.274 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.532 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:48.532 "name": "Existed_Raid", 00:27:48.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.532 "strip_size_kb": 0, 00:27:48.532 "state": "configuring", 00:27:48.532 "raid_level": "raid1", 00:27:48.532 "superblock": false, 00:27:48.532 "num_base_bdevs": 4, 00:27:48.532 "num_base_bdevs_discovered": 2, 00:27:48.532 "num_base_bdevs_operational": 4, 00:27:48.532 "base_bdevs_list": [ 00:27:48.532 { 00:27:48.532 "name": "BaseBdev1", 00:27:48.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.532 "is_configured": false, 00:27:48.532 "data_offset": 0, 00:27:48.532 "data_size": 0 00:27:48.532 }, 00:27:48.532 { 00:27:48.532 "name": null, 00:27:48.532 "uuid": 
"b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:48.532 "is_configured": false, 00:27:48.532 "data_offset": 0, 00:27:48.532 "data_size": 65536 00:27:48.532 }, 00:27:48.532 { 00:27:48.532 "name": "BaseBdev3", 00:27:48.532 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:48.532 "is_configured": true, 00:27:48.532 "data_offset": 0, 00:27:48.532 "data_size": 65536 00:27:48.532 }, 00:27:48.532 { 00:27:48.532 "name": "BaseBdev4", 00:27:48.532 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:48.532 "is_configured": true, 00:27:48.532 "data_offset": 0, 00:27:48.532 "data_size": 65536 00:27:48.532 } 00:27:48.532 ] 00:27:48.532 }' 00:27:48.532 14:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:48.532 14:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.464 14:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.464 14:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:49.722 14:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:27:49.722 14:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:49.980 [2024-07-25 14:10:38.792628] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:49.980 BaseBdev1 00:27:49.980 14:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:27:49.980 14:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:49.980 14:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:49.980 14:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:49.980 14:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:49.980 14:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:49.980 14:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:50.238 14:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:50.496 [ 00:27:50.496 { 00:27:50.496 "name": "BaseBdev1", 00:27:50.496 "aliases": [ 00:27:50.496 "d943c26f-8979-4d02-8917-a4cdf744e16c" 00:27:50.496 ], 00:27:50.496 "product_name": "Malloc disk", 00:27:50.496 "block_size": 512, 00:27:50.496 "num_blocks": 65536, 00:27:50.496 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:50.496 "assigned_rate_limits": { 00:27:50.496 "rw_ios_per_sec": 0, 00:27:50.496 "rw_mbytes_per_sec": 0, 00:27:50.496 "r_mbytes_per_sec": 0, 00:27:50.496 "w_mbytes_per_sec": 0 00:27:50.496 }, 00:27:50.496 "claimed": true, 00:27:50.496 "claim_type": "exclusive_write", 00:27:50.496 "zoned": false, 00:27:50.496 "supported_io_types": { 00:27:50.496 "read": true, 00:27:50.496 "write": true, 00:27:50.496 "unmap": true, 00:27:50.496 "flush": true, 00:27:50.496 "reset": true, 00:27:50.496 "nvme_admin": false, 00:27:50.496 "nvme_io": false, 00:27:50.496 
"nvme_io_md": false, 00:27:50.496 "write_zeroes": true, 00:27:50.496 "zcopy": true, 00:27:50.496 "get_zone_info": false, 00:27:50.496 "zone_management": false, 00:27:50.496 "zone_append": false, 00:27:50.496 "compare": false, 00:27:50.496 "compare_and_write": false, 00:27:50.496 "abort": true, 00:27:50.496 "seek_hole": false, 00:27:50.496 "seek_data": false, 00:27:50.496 "copy": true, 00:27:50.496 "nvme_iov_md": false 00:27:50.496 }, 00:27:50.496 "memory_domains": [ 00:27:50.496 { 00:27:50.496 "dma_device_id": "system", 00:27:50.496 "dma_device_type": 1 00:27:50.496 }, 00:27:50.496 { 00:27:50.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.496 "dma_device_type": 2 00:27:50.496 } 00:27:50.496 ], 00:27:50.496 "driver_specific": {} 00:27:50.496 } 00:27:50.496 ] 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.496 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.754 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.754 "name": "Existed_Raid", 00:27:50.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.754 "strip_size_kb": 0, 00:27:50.754 "state": "configuring", 00:27:50.754 "raid_level": "raid1", 00:27:50.754 "superblock": false, 00:27:50.754 "num_base_bdevs": 4, 00:27:50.754 "num_base_bdevs_discovered": 3, 00:27:50.754 "num_base_bdevs_operational": 4, 00:27:50.754 "base_bdevs_list": [ 00:27:50.754 { 00:27:50.754 "name": "BaseBdev1", 00:27:50.754 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:50.754 "is_configured": true, 00:27:50.754 "data_offset": 0, 00:27:50.754 "data_size": 65536 00:27:50.754 }, 00:27:50.754 { 00:27:50.754 "name": null, 00:27:50.754 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:50.754 "is_configured": false, 00:27:50.754 "data_offset": 0, 00:27:50.754 "data_size": 65536 00:27:50.754 }, 00:27:50.754 { 00:27:50.754 "name": "BaseBdev3", 00:27:50.754 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:50.754 "is_configured": true, 00:27:50.754 "data_offset": 0, 00:27:50.754 "data_size": 65536 00:27:50.754 }, 00:27:50.754 { 00:27:50.754 
"name": "BaseBdev4", 00:27:50.754 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:50.754 "is_configured": true, 00:27:50.754 "data_offset": 0, 00:27:50.754 "data_size": 65536 00:27:50.754 } 00:27:50.754 ] 00:27:50.754 }' 00:27:50.754 14:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.754 14:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.319 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.319 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:51.625 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:27:51.625 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:27:51.883 [2024-07-25 14:10:40.773728] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.883 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.884 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.884 14:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.142 14:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.142 "name": "Existed_Raid", 00:27:52.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.142 "strip_size_kb": 0, 00:27:52.142 "state": "configuring", 00:27:52.142 "raid_level": "raid1", 00:27:52.142 "superblock": false, 00:27:52.142 "num_base_bdevs": 4, 00:27:52.142 "num_base_bdevs_discovered": 2, 00:27:52.142 "num_base_bdevs_operational": 4, 00:27:52.142 "base_bdevs_list": [ 00:27:52.142 { 00:27:52.142 "name": "BaseBdev1", 00:27:52.142 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:52.142 "is_configured": true, 00:27:52.142 "data_offset": 0, 00:27:52.142 "data_size": 65536 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "name": null, 00:27:52.142 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:52.142 "is_configured": false, 00:27:52.142 "data_offset": 0, 00:27:52.142 "data_size": 65536 
00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "name": null, 00:27:52.142 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:52.142 "is_configured": false, 00:27:52.142 "data_offset": 0, 00:27:52.142 "data_size": 65536 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "name": "BaseBdev4", 00:27:52.142 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:52.142 "is_configured": true, 00:27:52.142 "data_offset": 0, 00:27:52.142 "data_size": 65536 00:27:52.142 } 00:27:52.142 ] 00:27:52.142 }' 00:27:52.142 14:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.142 14:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.708 14:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:52.708 14:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:53.274 [2024-07-25 14:10:42.274519] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.274 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.838 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.838 "name": "Existed_Raid", 00:27:53.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.838 "strip_size_kb": 0, 00:27:53.838 "state": "configuring", 00:27:53.838 "raid_level": "raid1", 00:27:53.838 "superblock": false, 00:27:53.838 "num_base_bdevs": 4, 00:27:53.838 "num_base_bdevs_discovered": 3, 00:27:53.838 "num_base_bdevs_operational": 4, 00:27:53.838 "base_bdevs_list": [ 00:27:53.838 { 00:27:53.838 "name": "BaseBdev1", 00:27:53.838 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:53.838 
"is_configured": true, 00:27:53.838 "data_offset": 0, 00:27:53.838 "data_size": 65536 00:27:53.838 }, 00:27:53.838 { 00:27:53.838 "name": null, 00:27:53.838 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:53.838 "is_configured": false, 00:27:53.838 "data_offset": 0, 00:27:53.838 "data_size": 65536 00:27:53.838 }, 00:27:53.838 { 00:27:53.838 "name": "BaseBdev3", 00:27:53.838 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:53.838 "is_configured": true, 00:27:53.838 "data_offset": 0, 00:27:53.838 "data_size": 65536 00:27:53.838 }, 00:27:53.838 { 00:27:53.838 "name": "BaseBdev4", 00:27:53.838 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:53.838 "is_configured": true, 00:27:53.838 "data_offset": 0, 00:27:53.838 "data_size": 65536 00:27:53.838 } 00:27:53.838 ] 00:27:53.838 }' 00:27:53.838 14:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.838 14:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.404 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.404 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:54.661 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:27:54.661 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:54.918 [2024-07-25 14:10:43.755316] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.918 14:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:55.176 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.176 "name": "Existed_Raid", 00:27:55.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.176 "strip_size_kb": 0, 00:27:55.176 "state": "configuring", 00:27:55.176 "raid_level": "raid1", 00:27:55.176 "superblock": false, 00:27:55.176 
"num_base_bdevs": 4, 00:27:55.176 "num_base_bdevs_discovered": 2, 00:27:55.176 "num_base_bdevs_operational": 4, 00:27:55.176 "base_bdevs_list": [ 00:27:55.176 { 00:27:55.176 "name": null, 00:27:55.176 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:55.176 "is_configured": false, 00:27:55.176 "data_offset": 0, 00:27:55.176 "data_size": 65536 00:27:55.176 }, 00:27:55.176 { 00:27:55.176 "name": null, 00:27:55.176 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:55.176 "is_configured": false, 00:27:55.176 "data_offset": 0, 00:27:55.176 "data_size": 65536 00:27:55.176 }, 00:27:55.176 { 00:27:55.176 "name": "BaseBdev3", 00:27:55.176 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:55.176 "is_configured": true, 00:27:55.176 "data_offset": 0, 00:27:55.176 "data_size": 65536 00:27:55.176 }, 00:27:55.176 { 00:27:55.176 "name": "BaseBdev4", 00:27:55.176 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:55.176 "is_configured": true, 00:27:55.176 "data_offset": 0, 00:27:55.176 "data_size": 65536 00:27:55.176 } 00:27:55.176 ] 00:27:55.176 }' 00:27:55.176 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.176 14:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.740 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.740 14:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:56.306 [2024-07-25 14:10:45.318839] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.306 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:56.563 14:10:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.563 "name": "Existed_Raid", 00:27:56.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.563 "strip_size_kb": 0, 00:27:56.563 "state": "configuring", 00:27:56.563 "raid_level": "raid1", 00:27:56.563 "superblock": false, 00:27:56.563 "num_base_bdevs": 4, 00:27:56.563 "num_base_bdevs_discovered": 3, 00:27:56.563 "num_base_bdevs_operational": 4, 00:27:56.563 "base_bdevs_list": [ 00:27:56.563 { 00:27:56.563 "name": null, 00:27:56.563 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:56.563 "is_configured": false, 00:27:56.563 "data_offset": 0, 00:27:56.563 "data_size": 65536 00:27:56.563 }, 00:27:56.563 { 00:27:56.563 "name": "BaseBdev2", 00:27:56.563 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:56.563 "is_configured": true, 00:27:56.563 "data_offset": 0, 00:27:56.563 "data_size": 65536 00:27:56.563 }, 00:27:56.563 { 00:27:56.563 "name": "BaseBdev3", 00:27:56.563 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:56.563 "is_configured": true, 00:27:56.563 "data_offset": 0, 00:27:56.563 "data_size": 65536 00:27:56.563 }, 00:27:56.563 { 00:27:56.563 "name": "BaseBdev4", 00:27:56.563 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:56.563 "is_configured": true, 00:27:56.563 "data_offset": 0, 00:27:56.563 "data_size": 65536 00:27:56.563 } 00:27:56.563 ] 00:27:56.563 }' 00:27:56.563 14:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.563 14:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.497 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.497 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:57.764 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:27:57.764 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.764 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:58.025 14:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d943c26f-8979-4d02-8917-a4cdf744e16c 00:27:58.282 [2024-07-25 14:10:47.106227] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:58.282 [2024-07-25 14:10:47.106773] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:27:58.282 [2024-07-25 14:10:47.106992] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:58.282 [2024-07-25 14:10:47.107317] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:58.282 [2024-07-25 14:10:47.107938] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:27:58.282 [2024-07-25 14:10:47.108149] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:27:58.282 [2024-07-25 14:10:47.108620] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.282 NewBaseBdev 00:27:58.282 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:27:58.282 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:27:58.282 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:58.282 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:58.283 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:58.283 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:58.283 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:58.540 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:58.798 [ 00:27:58.798 { 00:27:58.798 "name": "NewBaseBdev", 00:27:58.798 "aliases": [ 00:27:58.798 "d943c26f-8979-4d02-8917-a4cdf744e16c" 00:27:58.798 ], 00:27:58.798 "product_name": "Malloc disk", 00:27:58.798 "block_size": 512, 00:27:58.798 "num_blocks": 65536, 00:27:58.798 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:58.798 "assigned_rate_limits": { 00:27:58.798 "rw_ios_per_sec": 0, 00:27:58.798 "rw_mbytes_per_sec": 0, 00:27:58.798 "r_mbytes_per_sec": 0, 00:27:58.798 "w_mbytes_per_sec": 0 00:27:58.798 }, 00:27:58.798 "claimed": true, 00:27:58.798 "claim_type": "exclusive_write", 00:27:58.798 "zoned": false, 00:27:58.798 "supported_io_types": { 00:27:58.798 "read": true, 00:27:58.798 "write": true, 00:27:58.798 "unmap": true, 00:27:58.798 "flush": true, 00:27:58.798 "reset": true, 00:27:58.798 "nvme_admin": false, 00:27:58.798 "nvme_io": false, 00:27:58.798 "nvme_io_md": false, 00:27:58.798 "write_zeroes": true, 00:27:58.798 "zcopy": true, 00:27:58.798 "get_zone_info": false, 00:27:58.798 "zone_management": false, 00:27:58.798 "zone_append": false, 00:27:58.798 "compare": false, 00:27:58.798 "compare_and_write": false, 00:27:58.798 "abort": true, 00:27:58.798 "seek_hole": false, 00:27:58.798 "seek_data": false, 00:27:58.798 "copy": true, 00:27:58.798 "nvme_iov_md": false 00:27:58.798 }, 00:27:58.798 "memory_domains": [ 00:27:58.798 { 00:27:58.798 "dma_device_id": "system", 00:27:58.798 "dma_device_type": 1 00:27:58.798 }, 00:27:58.798 { 00:27:58.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.798 "dma_device_type": 2 00:27:58.798 } 00:27:58.798 ], 00:27:58.798 "driver_specific": {} 00:27:58.798 } 00:27:58.798 ] 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:58.798 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.106 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:59.106 "name": "Existed_Raid", 00:27:59.106 "uuid": "845e1c10-af5b-496a-9b90-d6d360311609", 00:27:59.106 "strip_size_kb": 0, 00:27:59.106 "state": "online", 00:27:59.106 "raid_level": "raid1", 00:27:59.106 "superblock": false, 00:27:59.106 "num_base_bdevs": 4, 00:27:59.106 "num_base_bdevs_discovered": 4, 00:27:59.106 "num_base_bdevs_operational": 4, 00:27:59.106 "base_bdevs_list": [ 00:27:59.106 { 00:27:59.106 "name": "NewBaseBdev", 00:27:59.106 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:59.106 "is_configured": true, 00:27:59.106 "data_offset": 0, 00:27:59.106 "data_size": 65536 00:27:59.106 }, 00:27:59.106 { 00:27:59.106 "name": "BaseBdev2", 00:27:59.106 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:59.106 "is_configured": true, 00:27:59.106 "data_offset": 0, 00:27:59.106 "data_size": 65536 00:27:59.106 }, 00:27:59.106 { 00:27:59.106 "name": "BaseBdev3", 00:27:59.106 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:59.106 "is_configured": true, 00:27:59.106 "data_offset": 0, 00:27:59.106 "data_size": 65536 00:27:59.106 }, 00:27:59.106 { 00:27:59.106 "name": "BaseBdev4", 00:27:59.106 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:59.106 "is_configured": true, 00:27:59.106 "data_offset": 0, 00:27:59.106 "data_size": 65536 00:27:59.106 } 00:27:59.106 ] 00:27:59.106 }' 00:27:59.106 14:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:59.106 14:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:59.672 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:59.930 [2024-07-25 14:10:48.727044] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:59.930 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:59.930 "name": "Existed_Raid", 00:27:59.930 "aliases": [ 00:27:59.930 
"845e1c10-af5b-496a-9b90-d6d360311609" 00:27:59.930 ], 00:27:59.930 "product_name": "Raid Volume", 00:27:59.930 "block_size": 512, 00:27:59.930 "num_blocks": 65536, 00:27:59.930 "uuid": "845e1c10-af5b-496a-9b90-d6d360311609", 00:27:59.930 "assigned_rate_limits": { 00:27:59.930 "rw_ios_per_sec": 0, 00:27:59.930 "rw_mbytes_per_sec": 0, 00:27:59.930 "r_mbytes_per_sec": 0, 00:27:59.930 "w_mbytes_per_sec": 0 00:27:59.930 }, 00:27:59.930 "claimed": false, 00:27:59.930 "zoned": false, 00:27:59.930 "supported_io_types": { 00:27:59.930 "read": true, 00:27:59.930 "write": true, 00:27:59.930 "unmap": false, 00:27:59.930 "flush": false, 00:27:59.930 "reset": true, 00:27:59.930 "nvme_admin": false, 00:27:59.930 "nvme_io": false, 00:27:59.930 "nvme_io_md": false, 00:27:59.930 "write_zeroes": true, 00:27:59.930 "zcopy": false, 00:27:59.930 "get_zone_info": false, 00:27:59.930 "zone_management": false, 00:27:59.930 "zone_append": false, 00:27:59.930 "compare": false, 00:27:59.930 "compare_and_write": false, 00:27:59.930 "abort": false, 00:27:59.930 "seek_hole": false, 00:27:59.930 "seek_data": false, 00:27:59.930 "copy": false, 00:27:59.930 "nvme_iov_md": false 00:27:59.930 }, 00:27:59.930 "memory_domains": [ 00:27:59.930 { 00:27:59.930 "dma_device_id": "system", 00:27:59.930 "dma_device_type": 1 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.930 "dma_device_type": 2 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "dma_device_id": "system", 00:27:59.930 "dma_device_type": 1 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.930 "dma_device_type": 2 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "dma_device_id": "system", 00:27:59.930 "dma_device_type": 1 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.930 "dma_device_type": 2 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "dma_device_id": "system", 00:27:59.930 "dma_device_type": 1 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.930 "dma_device_type": 2 00:27:59.930 } 00:27:59.930 ], 00:27:59.930 "driver_specific": { 00:27:59.930 "raid": { 00:27:59.930 "uuid": "845e1c10-af5b-496a-9b90-d6d360311609", 00:27:59.930 "strip_size_kb": 0, 00:27:59.930 "state": "online", 00:27:59.930 "raid_level": "raid1", 00:27:59.930 "superblock": false, 00:27:59.930 "num_base_bdevs": 4, 00:27:59.930 "num_base_bdevs_discovered": 4, 00:27:59.930 "num_base_bdevs_operational": 4, 00:27:59.930 "base_bdevs_list": [ 00:27:59.930 { 00:27:59.930 "name": "NewBaseBdev", 00:27:59.930 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:27:59.930 "is_configured": true, 00:27:59.930 "data_offset": 0, 00:27:59.930 "data_size": 65536 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "name": "BaseBdev2", 00:27:59.930 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:27:59.930 "is_configured": true, 00:27:59.930 "data_offset": 0, 00:27:59.930 "data_size": 65536 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "name": "BaseBdev3", 00:27:59.930 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:27:59.930 "is_configured": true, 00:27:59.930 "data_offset": 0, 00:27:59.930 "data_size": 65536 00:27:59.930 }, 00:27:59.930 { 00:27:59.930 "name": "BaseBdev4", 00:27:59.930 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:27:59.930 "is_configured": true, 00:27:59.930 "data_offset": 0, 00:27:59.930 "data_size": 65536 00:27:59.930 } 00:27:59.930 ] 00:27:59.930 } 00:27:59.930 } 00:27:59.930 }' 00:27:59.930 14:10:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:59.930 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:27:59.930 BaseBdev2 00:27:59.930 BaseBdev3 00:27:59.930 BaseBdev4' 00:27:59.930 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:59.930 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:27:59.930 14:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:00.188 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:00.188 "name": "NewBaseBdev", 00:28:00.188 "aliases": [ 00:28:00.188 "d943c26f-8979-4d02-8917-a4cdf744e16c" 00:28:00.188 ], 00:28:00.188 "product_name": "Malloc disk", 00:28:00.188 "block_size": 512, 00:28:00.188 "num_blocks": 65536, 00:28:00.188 "uuid": "d943c26f-8979-4d02-8917-a4cdf744e16c", 00:28:00.188 "assigned_rate_limits": { 00:28:00.188 "rw_ios_per_sec": 0, 00:28:00.188 "rw_mbytes_per_sec": 0, 00:28:00.188 "r_mbytes_per_sec": 0, 00:28:00.188 "w_mbytes_per_sec": 0 00:28:00.188 }, 00:28:00.188 "claimed": true, 00:28:00.188 "claim_type": "exclusive_write", 00:28:00.188 "zoned": false, 00:28:00.188 "supported_io_types": { 00:28:00.188 "read": true, 00:28:00.188 "write": true, 00:28:00.188 "unmap": true, 00:28:00.188 "flush": true, 00:28:00.188 "reset": true, 00:28:00.188 "nvme_admin": false, 00:28:00.188 "nvme_io": false, 00:28:00.188 "nvme_io_md": false, 00:28:00.188 "write_zeroes": true, 00:28:00.188 "zcopy": true, 00:28:00.188 "get_zone_info": false, 00:28:00.188 "zone_management": false, 00:28:00.188 "zone_append": false, 00:28:00.188 "compare": false, 00:28:00.188 "compare_and_write": false, 00:28:00.188 "abort": true, 00:28:00.188 "seek_hole": false, 00:28:00.188 "seek_data": false, 00:28:00.188 "copy": true, 00:28:00.188 "nvme_iov_md": false 00:28:00.189 }, 00:28:00.189 "memory_domains": [ 00:28:00.189 { 00:28:00.189 "dma_device_id": "system", 00:28:00.189 "dma_device_type": 1 00:28:00.189 }, 00:28:00.189 { 00:28:00.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:00.189 "dma_device_type": 2 00:28:00.189 } 00:28:00.189 ], 00:28:00.189 "driver_specific": {} 00:28:00.189 }' 00:28:00.189 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:00.189 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:00.189 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:00.189 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:00.447 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:00.447 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:00.447 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:00.447 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:00.447 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:00.447 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:00.447 14:10:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:00.704 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:00.704 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:00.704 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:00.704 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:00.961 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:00.961 "name": "BaseBdev2", 00:28:00.961 "aliases": [ 00:28:00.962 "b48bdf28-a79f-4937-a95e-70921809db9e" 00:28:00.962 ], 00:28:00.962 "product_name": "Malloc disk", 00:28:00.962 "block_size": 512, 00:28:00.962 "num_blocks": 65536, 00:28:00.962 "uuid": "b48bdf28-a79f-4937-a95e-70921809db9e", 00:28:00.962 "assigned_rate_limits": { 00:28:00.962 "rw_ios_per_sec": 0, 00:28:00.962 "rw_mbytes_per_sec": 0, 00:28:00.962 "r_mbytes_per_sec": 0, 00:28:00.962 "w_mbytes_per_sec": 0 00:28:00.962 }, 00:28:00.962 "claimed": true, 00:28:00.962 "claim_type": "exclusive_write", 00:28:00.962 "zoned": false, 00:28:00.962 "supported_io_types": { 00:28:00.962 "read": true, 00:28:00.962 "write": true, 00:28:00.962 "unmap": true, 00:28:00.962 "flush": true, 00:28:00.962 "reset": true, 00:28:00.962 "nvme_admin": false, 00:28:00.962 "nvme_io": false, 00:28:00.962 "nvme_io_md": false, 00:28:00.962 "write_zeroes": true, 00:28:00.962 "zcopy": true, 00:28:00.962 "get_zone_info": false, 00:28:00.962 "zone_management": false, 00:28:00.962 "zone_append": false, 00:28:00.962 "compare": false, 00:28:00.962 "compare_and_write": false, 00:28:00.962 "abort": true, 00:28:00.962 "seek_hole": false, 00:28:00.962 "seek_data": false, 00:28:00.962 "copy": true, 00:28:00.962 "nvme_iov_md": false 00:28:00.962 }, 00:28:00.962 "memory_domains": [ 00:28:00.962 { 00:28:00.962 "dma_device_id": "system", 00:28:00.962 "dma_device_type": 1 00:28:00.962 }, 00:28:00.962 { 00:28:00.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:00.962 "dma_device_type": 2 00:28:00.962 } 00:28:00.962 ], 00:28:00.962 "driver_specific": {} 00:28:00.962 }' 00:28:00.962 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:00.962 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:00.962 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:00.962 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:00.962 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:00.962 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:00.962 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:01.220 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:01.220 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:01.220 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:01.220 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:01.220 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:01.220 14:10:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:01.220 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:01.220 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:01.478 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:01.478 "name": "BaseBdev3", 00:28:01.478 "aliases": [ 00:28:01.478 "c091a24e-53c8-408e-b9e4-b0793e732736" 00:28:01.478 ], 00:28:01.478 "product_name": "Malloc disk", 00:28:01.478 "block_size": 512, 00:28:01.478 "num_blocks": 65536, 00:28:01.478 "uuid": "c091a24e-53c8-408e-b9e4-b0793e732736", 00:28:01.478 "assigned_rate_limits": { 00:28:01.478 "rw_ios_per_sec": 0, 00:28:01.478 "rw_mbytes_per_sec": 0, 00:28:01.478 "r_mbytes_per_sec": 0, 00:28:01.478 "w_mbytes_per_sec": 0 00:28:01.478 }, 00:28:01.478 "claimed": true, 00:28:01.478 "claim_type": "exclusive_write", 00:28:01.478 "zoned": false, 00:28:01.478 "supported_io_types": { 00:28:01.478 "read": true, 00:28:01.478 "write": true, 00:28:01.478 "unmap": true, 00:28:01.478 "flush": true, 00:28:01.478 "reset": true, 00:28:01.479 "nvme_admin": false, 00:28:01.479 "nvme_io": false, 00:28:01.479 "nvme_io_md": false, 00:28:01.479 "write_zeroes": true, 00:28:01.479 "zcopy": true, 00:28:01.479 "get_zone_info": false, 00:28:01.479 "zone_management": false, 00:28:01.479 "zone_append": false, 00:28:01.479 "compare": false, 00:28:01.479 "compare_and_write": false, 00:28:01.479 "abort": true, 00:28:01.479 "seek_hole": false, 00:28:01.479 "seek_data": false, 00:28:01.479 "copy": true, 00:28:01.479 "nvme_iov_md": false 00:28:01.479 }, 00:28:01.479 "memory_domains": [ 00:28:01.479 { 00:28:01.479 "dma_device_id": "system", 00:28:01.479 "dma_device_type": 1 00:28:01.479 }, 00:28:01.479 { 00:28:01.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.479 "dma_device_type": 2 00:28:01.479 } 00:28:01.479 ], 00:28:01.479 "driver_specific": {} 00:28:01.479 }' 00:28:01.479 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:01.479 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:01.736 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:01.994 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:01.994 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:01.994 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:01.994 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:02.252 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:02.252 "name": "BaseBdev4", 00:28:02.252 "aliases": [ 00:28:02.252 "256a80c0-01d3-4cf6-9f5c-b7a278e7091c" 00:28:02.252 ], 00:28:02.252 "product_name": "Malloc disk", 00:28:02.252 "block_size": 512, 00:28:02.252 "num_blocks": 65536, 00:28:02.252 "uuid": "256a80c0-01d3-4cf6-9f5c-b7a278e7091c", 00:28:02.252 "assigned_rate_limits": { 00:28:02.252 "rw_ios_per_sec": 0, 00:28:02.252 "rw_mbytes_per_sec": 0, 00:28:02.252 "r_mbytes_per_sec": 0, 00:28:02.252 "w_mbytes_per_sec": 0 00:28:02.252 }, 00:28:02.252 "claimed": true, 00:28:02.252 "claim_type": "exclusive_write", 00:28:02.252 "zoned": false, 00:28:02.252 "supported_io_types": { 00:28:02.252 "read": true, 00:28:02.252 "write": true, 00:28:02.252 "unmap": true, 00:28:02.252 "flush": true, 00:28:02.252 "reset": true, 00:28:02.252 "nvme_admin": false, 00:28:02.252 "nvme_io": false, 00:28:02.252 "nvme_io_md": false, 00:28:02.252 "write_zeroes": true, 00:28:02.252 "zcopy": true, 00:28:02.252 "get_zone_info": false, 00:28:02.252 "zone_management": false, 00:28:02.252 "zone_append": false, 00:28:02.252 "compare": false, 00:28:02.252 "compare_and_write": false, 00:28:02.252 "abort": true, 00:28:02.252 "seek_hole": false, 00:28:02.252 "seek_data": false, 00:28:02.252 "copy": true, 00:28:02.252 "nvme_iov_md": false 00:28:02.252 }, 00:28:02.252 "memory_domains": [ 00:28:02.252 { 00:28:02.252 "dma_device_id": "system", 00:28:02.252 "dma_device_type": 1 00:28:02.252 }, 00:28:02.252 { 00:28:02.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.252 "dma_device_type": 2 00:28:02.252 } 00:28:02.252 ], 00:28:02.252 "driver_specific": {} 00:28:02.252 }' 00:28:02.252 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:02.252 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:02.252 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:02.252 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:02.252 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:02.510 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:02.768 [2024-07-25 14:10:51.699704] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:02.768 [2024-07-25 14:10:51.700067] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:28:02.768 [2024-07-25 14:10:51.700431] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:02.768 [2024-07-25 14:10:51.701047] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:02.768 [2024-07-25 14:10:51.701237] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 140732 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 140732 ']' 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 140732 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140732 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140732' 00:28:02.768 killing process with pid 140732 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 140732 00:28:02.768 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 140732 00:28:02.768 [2024-07-25 14:10:51.738700] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:03.060 [2024-07-25 14:10:52.030571] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:04.442 ************************************ 00:28:04.442 END TEST raid_state_function_test 00:28:04.442 ************************************ 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:28:04.442 00:28:04.442 real 0m37.648s 00:28:04.442 user 1m10.024s 00:28:04.442 sys 0m4.422s 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.442 14:10:53 bdev_raid -- bdev/bdev_raid.sh@1022 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:28:04.442 14:10:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:04.442 14:10:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.442 14:10:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:04.442 ************************************ 00:28:04.442 START TEST raid_state_function_test_sb 00:28:04.442 ************************************ 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=141870 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141870' 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:04.442 Process raid pid: 141870 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 141870 /var/tmp/spdk-raid.sock 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 141870 ']' 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:04.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:04.442 14:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.442 [2024-07-25 14:10:53.221404] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:28:04.442 [2024-07-25 14:10:53.221969] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.442 [2024-07-25 14:10:53.393155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.700 [2024-07-25 14:10:53.636626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.958 [2024-07-25 14:10:53.827921] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:05.216 14:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:05.216 14:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:28:05.216 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:05.474 [2024-07-25 14:10:54.423951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:05.474 [2024-07-25 14:10:54.424271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:05.474 [2024-07-25 14:10:54.424405] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:05.474 [2024-07-25 14:10:54.424567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:05.474 [2024-07-25 14:10:54.424719] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:05.474 [2024-07-25 14:10:54.424809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:05.474 [2024-07-25 14:10:54.424935] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:05.474 [2024-07-25 14:10:54.425013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.474 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.731 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.731 "name": "Existed_Raid", 00:28:05.731 "uuid": "e60a4b03-6d25-454a-b96f-0b8575e2773f", 00:28:05.732 "strip_size_kb": 0, 00:28:05.732 "state": "configuring", 00:28:05.732 "raid_level": "raid1", 00:28:05.732 "superblock": true, 00:28:05.732 "num_base_bdevs": 4, 00:28:05.732 "num_base_bdevs_discovered": 0, 00:28:05.732 "num_base_bdevs_operational": 4, 00:28:05.732 "base_bdevs_list": [ 00:28:05.732 { 00:28:05.732 "name": "BaseBdev1", 00:28:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.732 "is_configured": false, 00:28:05.732 "data_offset": 0, 00:28:05.732 "data_size": 0 00:28:05.732 }, 00:28:05.732 { 00:28:05.732 "name": "BaseBdev2", 00:28:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.732 "is_configured": false, 00:28:05.732 "data_offset": 0, 00:28:05.732 "data_size": 0 00:28:05.732 }, 00:28:05.732 { 00:28:05.732 "name": "BaseBdev3", 00:28:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.732 "is_configured": false, 00:28:05.732 "data_offset": 0, 00:28:05.732 "data_size": 0 00:28:05.732 }, 00:28:05.732 { 00:28:05.732 "name": "BaseBdev4", 00:28:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.732 "is_configured": false, 00:28:05.732 "data_offset": 0, 00:28:05.732 "data_size": 0 00:28:05.732 } 00:28:05.732 ] 00:28:05.732 }' 00:28:05.732 14:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.732 14:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.665 14:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:06.665 [2024-07-25 14:10:55.580092] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:06.665 [2024-07-25 14:10:55.580315] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:28:06.665 14:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:06.923 [2024-07-25 14:10:55.872155] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:06.923 [2024-07-25 14:10:55.872412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:28:06.923 [2024-07-25 14:10:55.872544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:06.923 [2024-07-25 14:10:55.872632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:06.923 [2024-07-25 14:10:55.872734] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:06.923 [2024-07-25 14:10:55.872925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:06.923 [2024-07-25 14:10:55.873027] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:06.923 [2024-07-25 14:10:55.873090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:06.923 14:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:07.181 [2024-07-25 14:10:56.172971] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:07.181 BaseBdev1 00:28:07.181 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:07.181 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:07.181 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:07.181 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:07.181 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:07.181 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:07.181 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:07.440 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:07.697 [ 00:28:07.697 { 00:28:07.697 "name": "BaseBdev1", 00:28:07.697 "aliases": [ 00:28:07.697 "a7e5a335-fa27-4879-913d-fd82a93c6261" 00:28:07.697 ], 00:28:07.697 "product_name": "Malloc disk", 00:28:07.697 "block_size": 512, 00:28:07.697 "num_blocks": 65536, 00:28:07.697 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:07.697 "assigned_rate_limits": { 00:28:07.697 "rw_ios_per_sec": 0, 00:28:07.697 "rw_mbytes_per_sec": 0, 00:28:07.697 "r_mbytes_per_sec": 0, 00:28:07.697 "w_mbytes_per_sec": 0 00:28:07.697 }, 00:28:07.697 "claimed": true, 00:28:07.697 "claim_type": "exclusive_write", 00:28:07.697 "zoned": false, 00:28:07.697 "supported_io_types": { 00:28:07.697 "read": true, 00:28:07.697 "write": true, 00:28:07.697 "unmap": true, 00:28:07.697 "flush": true, 00:28:07.697 "reset": true, 00:28:07.697 "nvme_admin": false, 00:28:07.697 "nvme_io": false, 00:28:07.697 "nvme_io_md": false, 00:28:07.697 "write_zeroes": true, 00:28:07.697 "zcopy": true, 00:28:07.697 "get_zone_info": false, 00:28:07.697 "zone_management": false, 00:28:07.697 "zone_append": false, 00:28:07.697 "compare": false, 00:28:07.697 "compare_and_write": false, 00:28:07.697 "abort": true, 00:28:07.697 "seek_hole": false, 00:28:07.697 "seek_data": false, 00:28:07.697 "copy": true, 00:28:07.697 "nvme_iov_md": false 00:28:07.697 }, 00:28:07.697 
"memory_domains": [ 00:28:07.697 { 00:28:07.698 "dma_device_id": "system", 00:28:07.698 "dma_device_type": 1 00:28:07.698 }, 00:28:07.698 { 00:28:07.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.698 "dma_device_type": 2 00:28:07.698 } 00:28:07.698 ], 00:28:07.698 "driver_specific": {} 00:28:07.698 } 00:28:07.698 ] 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.698 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.956 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:07.956 "name": "Existed_Raid", 00:28:07.956 "uuid": "fe4e2c59-13b5-4497-8f03-b8b303c11872", 00:28:07.956 "strip_size_kb": 0, 00:28:07.956 "state": "configuring", 00:28:07.956 "raid_level": "raid1", 00:28:07.956 "superblock": true, 00:28:07.956 "num_base_bdevs": 4, 00:28:07.956 "num_base_bdevs_discovered": 1, 00:28:07.956 "num_base_bdevs_operational": 4, 00:28:07.956 "base_bdevs_list": [ 00:28:07.956 { 00:28:07.956 "name": "BaseBdev1", 00:28:07.956 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:07.956 "is_configured": true, 00:28:07.956 "data_offset": 2048, 00:28:07.956 "data_size": 63488 00:28:07.956 }, 00:28:07.956 { 00:28:07.956 "name": "BaseBdev2", 00:28:07.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.956 "is_configured": false, 00:28:07.956 "data_offset": 0, 00:28:07.956 "data_size": 0 00:28:07.956 }, 00:28:07.956 { 00:28:07.956 "name": "BaseBdev3", 00:28:07.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.956 "is_configured": false, 00:28:07.956 "data_offset": 0, 00:28:07.956 "data_size": 0 00:28:07.956 }, 00:28:07.956 { 00:28:07.956 "name": "BaseBdev4", 00:28:07.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.956 "is_configured": false, 00:28:07.956 "data_offset": 0, 00:28:07.956 "data_size": 0 00:28:07.956 } 00:28:07.956 ] 00:28:07.956 }' 00:28:07.956 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:07.956 14:10:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.522 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:08.822 [2024-07-25 14:10:57.829418] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:08.822 [2024-07-25 14:10:57.829725] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:28:08.822 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:09.080 [2024-07-25 14:10:58.085501] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:09.080 [2024-07-25 14:10:58.087800] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:09.080 [2024-07-25 14:10:58.088026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:09.080 [2024-07-25 14:10:58.088160] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:09.080 [2024-07-25 14:10:58.088229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:09.080 [2024-07-25 14:10:58.088342] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:09.080 [2024-07-25 14:10:58.088408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.080 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.339 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:09.339 "name": "Existed_Raid", 00:28:09.339 
"uuid": "d417192c-72ae-4749-a3f5-83ec38942473", 00:28:09.339 "strip_size_kb": 0, 00:28:09.339 "state": "configuring", 00:28:09.339 "raid_level": "raid1", 00:28:09.339 "superblock": true, 00:28:09.339 "num_base_bdevs": 4, 00:28:09.339 "num_base_bdevs_discovered": 1, 00:28:09.339 "num_base_bdevs_operational": 4, 00:28:09.339 "base_bdevs_list": [ 00:28:09.339 { 00:28:09.339 "name": "BaseBdev1", 00:28:09.339 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:09.339 "is_configured": true, 00:28:09.339 "data_offset": 2048, 00:28:09.339 "data_size": 63488 00:28:09.339 }, 00:28:09.339 { 00:28:09.339 "name": "BaseBdev2", 00:28:09.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.339 "is_configured": false, 00:28:09.339 "data_offset": 0, 00:28:09.339 "data_size": 0 00:28:09.339 }, 00:28:09.339 { 00:28:09.339 "name": "BaseBdev3", 00:28:09.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.339 "is_configured": false, 00:28:09.339 "data_offset": 0, 00:28:09.339 "data_size": 0 00:28:09.339 }, 00:28:09.339 { 00:28:09.339 "name": "BaseBdev4", 00:28:09.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.339 "is_configured": false, 00:28:09.339 "data_offset": 0, 00:28:09.339 "data_size": 0 00:28:09.339 } 00:28:09.339 ] 00:28:09.339 }' 00:28:09.339 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:09.339 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:10.273 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:10.273 [2024-07-25 14:10:59.307891] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:10.273 BaseBdev2 00:28:10.531 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:10.531 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:10.531 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:10.531 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:10.531 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:10.531 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:10.531 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:10.789 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:11.046 [ 00:28:11.046 { 00:28:11.046 "name": "BaseBdev2", 00:28:11.046 "aliases": [ 00:28:11.046 "a9f6c4c8-cb27-4a93-89de-03bb7323a492" 00:28:11.046 ], 00:28:11.046 "product_name": "Malloc disk", 00:28:11.046 "block_size": 512, 00:28:11.046 "num_blocks": 65536, 00:28:11.046 "uuid": "a9f6c4c8-cb27-4a93-89de-03bb7323a492", 00:28:11.046 "assigned_rate_limits": { 00:28:11.046 "rw_ios_per_sec": 0, 00:28:11.046 "rw_mbytes_per_sec": 0, 00:28:11.046 "r_mbytes_per_sec": 0, 00:28:11.046 "w_mbytes_per_sec": 0 00:28:11.046 }, 00:28:11.046 "claimed": true, 00:28:11.046 "claim_type": "exclusive_write", 00:28:11.046 "zoned": false, 00:28:11.046 
"supported_io_types": { 00:28:11.046 "read": true, 00:28:11.046 "write": true, 00:28:11.046 "unmap": true, 00:28:11.046 "flush": true, 00:28:11.046 "reset": true, 00:28:11.046 "nvme_admin": false, 00:28:11.046 "nvme_io": false, 00:28:11.046 "nvme_io_md": false, 00:28:11.046 "write_zeroes": true, 00:28:11.046 "zcopy": true, 00:28:11.046 "get_zone_info": false, 00:28:11.046 "zone_management": false, 00:28:11.046 "zone_append": false, 00:28:11.046 "compare": false, 00:28:11.046 "compare_and_write": false, 00:28:11.046 "abort": true, 00:28:11.046 "seek_hole": false, 00:28:11.046 "seek_data": false, 00:28:11.046 "copy": true, 00:28:11.046 "nvme_iov_md": false 00:28:11.046 }, 00:28:11.046 "memory_domains": [ 00:28:11.046 { 00:28:11.046 "dma_device_id": "system", 00:28:11.046 "dma_device_type": 1 00:28:11.046 }, 00:28:11.046 { 00:28:11.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.046 "dma_device_type": 2 00:28:11.046 } 00:28:11.047 ], 00:28:11.047 "driver_specific": {} 00:28:11.047 } 00:28:11.047 ] 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.047 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:11.305 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:11.305 "name": "Existed_Raid", 00:28:11.305 "uuid": "d417192c-72ae-4749-a3f5-83ec38942473", 00:28:11.305 "strip_size_kb": 0, 00:28:11.305 "state": "configuring", 00:28:11.305 "raid_level": "raid1", 00:28:11.305 "superblock": true, 00:28:11.305 "num_base_bdevs": 4, 00:28:11.305 "num_base_bdevs_discovered": 2, 00:28:11.305 "num_base_bdevs_operational": 4, 00:28:11.305 "base_bdevs_list": [ 00:28:11.305 { 00:28:11.305 "name": "BaseBdev1", 00:28:11.305 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:11.305 "is_configured": true, 00:28:11.305 "data_offset": 2048, 00:28:11.305 
"data_size": 63488 00:28:11.305 }, 00:28:11.305 { 00:28:11.305 "name": "BaseBdev2", 00:28:11.305 "uuid": "a9f6c4c8-cb27-4a93-89de-03bb7323a492", 00:28:11.305 "is_configured": true, 00:28:11.305 "data_offset": 2048, 00:28:11.305 "data_size": 63488 00:28:11.305 }, 00:28:11.305 { 00:28:11.305 "name": "BaseBdev3", 00:28:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.305 "is_configured": false, 00:28:11.305 "data_offset": 0, 00:28:11.305 "data_size": 0 00:28:11.305 }, 00:28:11.305 { 00:28:11.305 "name": "BaseBdev4", 00:28:11.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.305 "is_configured": false, 00:28:11.305 "data_offset": 0, 00:28:11.305 "data_size": 0 00:28:11.305 } 00:28:11.305 ] 00:28:11.305 }' 00:28:11.305 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:11.305 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.870 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:12.129 [2024-07-25 14:11:00.983860] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:12.129 BaseBdev3 00:28:12.129 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:12.129 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:12.129 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:12.129 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:12.129 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:12.129 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:12.129 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:12.386 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:12.644 [ 00:28:12.644 { 00:28:12.644 "name": "BaseBdev3", 00:28:12.644 "aliases": [ 00:28:12.644 "ee378a41-4ff0-420b-a543-83f4e6e65079" 00:28:12.644 ], 00:28:12.644 "product_name": "Malloc disk", 00:28:12.644 "block_size": 512, 00:28:12.644 "num_blocks": 65536, 00:28:12.644 "uuid": "ee378a41-4ff0-420b-a543-83f4e6e65079", 00:28:12.644 "assigned_rate_limits": { 00:28:12.644 "rw_ios_per_sec": 0, 00:28:12.644 "rw_mbytes_per_sec": 0, 00:28:12.644 "r_mbytes_per_sec": 0, 00:28:12.644 "w_mbytes_per_sec": 0 00:28:12.644 }, 00:28:12.644 "claimed": true, 00:28:12.644 "claim_type": "exclusive_write", 00:28:12.644 "zoned": false, 00:28:12.644 "supported_io_types": { 00:28:12.644 "read": true, 00:28:12.644 "write": true, 00:28:12.644 "unmap": true, 00:28:12.644 "flush": true, 00:28:12.644 "reset": true, 00:28:12.644 "nvme_admin": false, 00:28:12.644 "nvme_io": false, 00:28:12.644 "nvme_io_md": false, 00:28:12.644 "write_zeroes": true, 00:28:12.644 "zcopy": true, 00:28:12.644 "get_zone_info": false, 00:28:12.644 "zone_management": false, 00:28:12.644 "zone_append": false, 00:28:12.644 "compare": false, 00:28:12.644 "compare_and_write": false, 00:28:12.644 "abort": 
true, 00:28:12.644 "seek_hole": false, 00:28:12.644 "seek_data": false, 00:28:12.644 "copy": true, 00:28:12.644 "nvme_iov_md": false 00:28:12.644 }, 00:28:12.644 "memory_domains": [ 00:28:12.644 { 00:28:12.644 "dma_device_id": "system", 00:28:12.644 "dma_device_type": 1 00:28:12.644 }, 00:28:12.644 { 00:28:12.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.644 "dma_device_type": 2 00:28:12.644 } 00:28:12.644 ], 00:28:12.644 "driver_specific": {} 00:28:12.644 } 00:28:12.644 ] 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.644 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.901 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.901 "name": "Existed_Raid", 00:28:12.901 "uuid": "d417192c-72ae-4749-a3f5-83ec38942473", 00:28:12.901 "strip_size_kb": 0, 00:28:12.901 "state": "configuring", 00:28:12.901 "raid_level": "raid1", 00:28:12.901 "superblock": true, 00:28:12.901 "num_base_bdevs": 4, 00:28:12.901 "num_base_bdevs_discovered": 3, 00:28:12.901 "num_base_bdevs_operational": 4, 00:28:12.901 "base_bdevs_list": [ 00:28:12.901 { 00:28:12.901 "name": "BaseBdev1", 00:28:12.901 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:12.901 "is_configured": true, 00:28:12.901 "data_offset": 2048, 00:28:12.901 "data_size": 63488 00:28:12.901 }, 00:28:12.901 { 00:28:12.901 "name": "BaseBdev2", 00:28:12.901 "uuid": "a9f6c4c8-cb27-4a93-89de-03bb7323a492", 00:28:12.901 "is_configured": true, 00:28:12.901 "data_offset": 2048, 00:28:12.901 "data_size": 63488 00:28:12.901 }, 00:28:12.901 { 00:28:12.901 "name": "BaseBdev3", 00:28:12.901 "uuid": "ee378a41-4ff0-420b-a543-83f4e6e65079", 00:28:12.901 "is_configured": true, 00:28:12.901 "data_offset": 2048, 00:28:12.901 "data_size": 63488 00:28:12.901 }, 00:28:12.901 { 00:28:12.901 "name": "BaseBdev4", 
00:28:12.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.901 "is_configured": false, 00:28:12.901 "data_offset": 0, 00:28:12.901 "data_size": 0 00:28:12.901 } 00:28:12.901 ] 00:28:12.901 }' 00:28:12.901 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.901 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.467 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:13.725 [2024-07-25 14:11:02.705631] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:13.725 [2024-07-25 14:11:02.706154] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:28:13.725 [2024-07-25 14:11:02.706321] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:13.725 [2024-07-25 14:11:02.706499] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:28:13.725 BaseBdev4 00:28:13.725 [2024-07-25 14:11:02.706999] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:28:13.725 [2024-07-25 14:11:02.707016] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:28:13.725 [2024-07-25 14:11:02.707195] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.725 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:28:13.725 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:13.725 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:13.725 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:13.725 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:13.725 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:13.725 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:13.984 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:14.241 [ 00:28:14.241 { 00:28:14.241 "name": "BaseBdev4", 00:28:14.241 "aliases": [ 00:28:14.241 "3ef4fa40-aece-4d53-b7b6-c65e1e6df185" 00:28:14.241 ], 00:28:14.241 "product_name": "Malloc disk", 00:28:14.241 "block_size": 512, 00:28:14.241 "num_blocks": 65536, 00:28:14.242 "uuid": "3ef4fa40-aece-4d53-b7b6-c65e1e6df185", 00:28:14.242 "assigned_rate_limits": { 00:28:14.242 "rw_ios_per_sec": 0, 00:28:14.242 "rw_mbytes_per_sec": 0, 00:28:14.242 "r_mbytes_per_sec": 0, 00:28:14.242 "w_mbytes_per_sec": 0 00:28:14.242 }, 00:28:14.242 "claimed": true, 00:28:14.242 "claim_type": "exclusive_write", 00:28:14.242 "zoned": false, 00:28:14.242 "supported_io_types": { 00:28:14.242 "read": true, 00:28:14.242 "write": true, 00:28:14.242 "unmap": true, 00:28:14.242 "flush": true, 00:28:14.242 "reset": true, 00:28:14.242 "nvme_admin": false, 00:28:14.242 "nvme_io": false, 00:28:14.242 "nvme_io_md": false, 00:28:14.242 "write_zeroes": 
true, 00:28:14.242 "zcopy": true, 00:28:14.242 "get_zone_info": false, 00:28:14.242 "zone_management": false, 00:28:14.242 "zone_append": false, 00:28:14.242 "compare": false, 00:28:14.242 "compare_and_write": false, 00:28:14.242 "abort": true, 00:28:14.242 "seek_hole": false, 00:28:14.242 "seek_data": false, 00:28:14.242 "copy": true, 00:28:14.242 "nvme_iov_md": false 00:28:14.242 }, 00:28:14.242 "memory_domains": [ 00:28:14.242 { 00:28:14.242 "dma_device_id": "system", 00:28:14.242 "dma_device_type": 1 00:28:14.242 }, 00:28:14.242 { 00:28:14.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.242 "dma_device_type": 2 00:28:14.242 } 00:28:14.242 ], 00:28:14.242 "driver_specific": {} 00:28:14.242 } 00:28:14.242 ] 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.242 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:14.807 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:14.807 "name": "Existed_Raid", 00:28:14.807 "uuid": "d417192c-72ae-4749-a3f5-83ec38942473", 00:28:14.807 "strip_size_kb": 0, 00:28:14.807 "state": "online", 00:28:14.807 "raid_level": "raid1", 00:28:14.807 "superblock": true, 00:28:14.807 "num_base_bdevs": 4, 00:28:14.807 "num_base_bdevs_discovered": 4, 00:28:14.807 "num_base_bdevs_operational": 4, 00:28:14.807 "base_bdevs_list": [ 00:28:14.807 { 00:28:14.807 "name": "BaseBdev1", 00:28:14.808 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:14.808 "is_configured": true, 00:28:14.808 "data_offset": 2048, 00:28:14.808 "data_size": 63488 00:28:14.808 }, 00:28:14.808 { 00:28:14.808 "name": "BaseBdev2", 00:28:14.808 "uuid": "a9f6c4c8-cb27-4a93-89de-03bb7323a492", 00:28:14.808 "is_configured": true, 00:28:14.808 "data_offset": 2048, 00:28:14.808 "data_size": 63488 00:28:14.808 }, 00:28:14.808 { 00:28:14.808 "name": "BaseBdev3", 
00:28:14.808 "uuid": "ee378a41-4ff0-420b-a543-83f4e6e65079", 00:28:14.808 "is_configured": true, 00:28:14.808 "data_offset": 2048, 00:28:14.808 "data_size": 63488 00:28:14.808 }, 00:28:14.808 { 00:28:14.808 "name": "BaseBdev4", 00:28:14.808 "uuid": "3ef4fa40-aece-4d53-b7b6-c65e1e6df185", 00:28:14.808 "is_configured": true, 00:28:14.808 "data_offset": 2048, 00:28:14.808 "data_size": 63488 00:28:14.808 } 00:28:14.808 ] 00:28:14.808 }' 00:28:14.808 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:14.808 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:15.374 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:15.374 [2024-07-25 14:11:04.398427] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.702 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:15.702 "name": "Existed_Raid", 00:28:15.702 "aliases": [ 00:28:15.702 "d417192c-72ae-4749-a3f5-83ec38942473" 00:28:15.702 ], 00:28:15.702 "product_name": "Raid Volume", 00:28:15.702 "block_size": 512, 00:28:15.702 "num_blocks": 63488, 00:28:15.702 "uuid": "d417192c-72ae-4749-a3f5-83ec38942473", 00:28:15.702 "assigned_rate_limits": { 00:28:15.702 "rw_ios_per_sec": 0, 00:28:15.702 "rw_mbytes_per_sec": 0, 00:28:15.702 "r_mbytes_per_sec": 0, 00:28:15.702 "w_mbytes_per_sec": 0 00:28:15.702 }, 00:28:15.702 "claimed": false, 00:28:15.702 "zoned": false, 00:28:15.702 "supported_io_types": { 00:28:15.702 "read": true, 00:28:15.702 "write": true, 00:28:15.702 "unmap": false, 00:28:15.702 "flush": false, 00:28:15.702 "reset": true, 00:28:15.702 "nvme_admin": false, 00:28:15.702 "nvme_io": false, 00:28:15.702 "nvme_io_md": false, 00:28:15.702 "write_zeroes": true, 00:28:15.702 "zcopy": false, 00:28:15.702 "get_zone_info": false, 00:28:15.702 "zone_management": false, 00:28:15.702 "zone_append": false, 00:28:15.702 "compare": false, 00:28:15.702 "compare_and_write": false, 00:28:15.702 "abort": false, 00:28:15.702 "seek_hole": false, 00:28:15.702 "seek_data": false, 00:28:15.702 "copy": false, 00:28:15.702 "nvme_iov_md": false 00:28:15.702 }, 00:28:15.702 "memory_domains": [ 00:28:15.702 { 00:28:15.702 "dma_device_id": "system", 00:28:15.702 "dma_device_type": 1 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.702 "dma_device_type": 2 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "dma_device_id": "system", 00:28:15.702 "dma_device_type": 1 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:28:15.702 "dma_device_type": 2 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "dma_device_id": "system", 00:28:15.702 "dma_device_type": 1 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.702 "dma_device_type": 2 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "dma_device_id": "system", 00:28:15.702 "dma_device_type": 1 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.702 "dma_device_type": 2 00:28:15.702 } 00:28:15.702 ], 00:28:15.702 "driver_specific": { 00:28:15.702 "raid": { 00:28:15.702 "uuid": "d417192c-72ae-4749-a3f5-83ec38942473", 00:28:15.702 "strip_size_kb": 0, 00:28:15.702 "state": "online", 00:28:15.702 "raid_level": "raid1", 00:28:15.702 "superblock": true, 00:28:15.702 "num_base_bdevs": 4, 00:28:15.702 "num_base_bdevs_discovered": 4, 00:28:15.702 "num_base_bdevs_operational": 4, 00:28:15.702 "base_bdevs_list": [ 00:28:15.702 { 00:28:15.702 "name": "BaseBdev1", 00:28:15.702 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:15.702 "is_configured": true, 00:28:15.702 "data_offset": 2048, 00:28:15.702 "data_size": 63488 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "name": "BaseBdev2", 00:28:15.702 "uuid": "a9f6c4c8-cb27-4a93-89de-03bb7323a492", 00:28:15.702 "is_configured": true, 00:28:15.702 "data_offset": 2048, 00:28:15.702 "data_size": 63488 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "name": "BaseBdev3", 00:28:15.702 "uuid": "ee378a41-4ff0-420b-a543-83f4e6e65079", 00:28:15.702 "is_configured": true, 00:28:15.702 "data_offset": 2048, 00:28:15.702 "data_size": 63488 00:28:15.702 }, 00:28:15.702 { 00:28:15.702 "name": "BaseBdev4", 00:28:15.702 "uuid": "3ef4fa40-aece-4d53-b7b6-c65e1e6df185", 00:28:15.702 "is_configured": true, 00:28:15.702 "data_offset": 2048, 00:28:15.702 "data_size": 63488 00:28:15.702 } 00:28:15.702 ] 00:28:15.702 } 00:28:15.702 } 00:28:15.702 }' 00:28:15.702 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:15.702 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:15.702 BaseBdev2 00:28:15.702 BaseBdev3 00:28:15.702 BaseBdev4' 00:28:15.702 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:15.702 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:15.702 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:15.964 "name": "BaseBdev1", 00:28:15.964 "aliases": [ 00:28:15.964 "a7e5a335-fa27-4879-913d-fd82a93c6261" 00:28:15.964 ], 00:28:15.964 "product_name": "Malloc disk", 00:28:15.964 "block_size": 512, 00:28:15.964 "num_blocks": 65536, 00:28:15.964 "uuid": "a7e5a335-fa27-4879-913d-fd82a93c6261", 00:28:15.964 "assigned_rate_limits": { 00:28:15.964 "rw_ios_per_sec": 0, 00:28:15.964 "rw_mbytes_per_sec": 0, 00:28:15.964 "r_mbytes_per_sec": 0, 00:28:15.964 "w_mbytes_per_sec": 0 00:28:15.964 }, 00:28:15.964 "claimed": true, 00:28:15.964 "claim_type": "exclusive_write", 00:28:15.964 "zoned": false, 00:28:15.964 "supported_io_types": { 00:28:15.964 "read": true, 00:28:15.964 "write": true, 00:28:15.964 "unmap": true, 00:28:15.964 "flush": true, 00:28:15.964 "reset": true, 
00:28:15.964 "nvme_admin": false, 00:28:15.964 "nvme_io": false, 00:28:15.964 "nvme_io_md": false, 00:28:15.964 "write_zeroes": true, 00:28:15.964 "zcopy": true, 00:28:15.964 "get_zone_info": false, 00:28:15.964 "zone_management": false, 00:28:15.964 "zone_append": false, 00:28:15.964 "compare": false, 00:28:15.964 "compare_and_write": false, 00:28:15.964 "abort": true, 00:28:15.964 "seek_hole": false, 00:28:15.964 "seek_data": false, 00:28:15.964 "copy": true, 00:28:15.964 "nvme_iov_md": false 00:28:15.964 }, 00:28:15.964 "memory_domains": [ 00:28:15.964 { 00:28:15.964 "dma_device_id": "system", 00:28:15.964 "dma_device_type": 1 00:28:15.964 }, 00:28:15.964 { 00:28:15.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.964 "dma_device_type": 2 00:28:15.964 } 00:28:15.964 ], 00:28:15.964 "driver_specific": {} 00:28:15.964 }' 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:15.964 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:16.222 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:16.479 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:16.479 "name": "BaseBdev2", 00:28:16.479 "aliases": [ 00:28:16.480 "a9f6c4c8-cb27-4a93-89de-03bb7323a492" 00:28:16.480 ], 00:28:16.480 "product_name": "Malloc disk", 00:28:16.480 "block_size": 512, 00:28:16.480 "num_blocks": 65536, 00:28:16.480 "uuid": "a9f6c4c8-cb27-4a93-89de-03bb7323a492", 00:28:16.480 "assigned_rate_limits": { 00:28:16.480 "rw_ios_per_sec": 0, 00:28:16.480 "rw_mbytes_per_sec": 0, 00:28:16.480 "r_mbytes_per_sec": 0, 00:28:16.480 "w_mbytes_per_sec": 0 00:28:16.480 }, 00:28:16.480 "claimed": true, 00:28:16.480 "claim_type": "exclusive_write", 00:28:16.480 "zoned": false, 00:28:16.480 "supported_io_types": { 00:28:16.480 "read": true, 00:28:16.480 "write": true, 00:28:16.480 "unmap": true, 00:28:16.480 "flush": true, 00:28:16.480 "reset": true, 00:28:16.480 "nvme_admin": false, 00:28:16.480 "nvme_io": false, 00:28:16.480 "nvme_io_md": false, 00:28:16.480 "write_zeroes": true, 00:28:16.480 
"zcopy": true, 00:28:16.480 "get_zone_info": false, 00:28:16.480 "zone_management": false, 00:28:16.480 "zone_append": false, 00:28:16.480 "compare": false, 00:28:16.480 "compare_and_write": false, 00:28:16.480 "abort": true, 00:28:16.480 "seek_hole": false, 00:28:16.480 "seek_data": false, 00:28:16.480 "copy": true, 00:28:16.480 "nvme_iov_md": false 00:28:16.480 }, 00:28:16.480 "memory_domains": [ 00:28:16.480 { 00:28:16.480 "dma_device_id": "system", 00:28:16.480 "dma_device_type": 1 00:28:16.480 }, 00:28:16.480 { 00:28:16.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.480 "dma_device_type": 2 00:28:16.480 } 00:28:16.480 ], 00:28:16.480 "driver_specific": {} 00:28:16.480 }' 00:28:16.480 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:16.480 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:16.737 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:16.994 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:16.994 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:16.994 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:16.994 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:16.994 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:17.251 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:17.251 "name": "BaseBdev3", 00:28:17.251 "aliases": [ 00:28:17.252 "ee378a41-4ff0-420b-a543-83f4e6e65079" 00:28:17.252 ], 00:28:17.252 "product_name": "Malloc disk", 00:28:17.252 "block_size": 512, 00:28:17.252 "num_blocks": 65536, 00:28:17.252 "uuid": "ee378a41-4ff0-420b-a543-83f4e6e65079", 00:28:17.252 "assigned_rate_limits": { 00:28:17.252 "rw_ios_per_sec": 0, 00:28:17.252 "rw_mbytes_per_sec": 0, 00:28:17.252 "r_mbytes_per_sec": 0, 00:28:17.252 "w_mbytes_per_sec": 0 00:28:17.252 }, 00:28:17.252 "claimed": true, 00:28:17.252 "claim_type": "exclusive_write", 00:28:17.252 "zoned": false, 00:28:17.252 "supported_io_types": { 00:28:17.252 "read": true, 00:28:17.252 "write": true, 00:28:17.252 "unmap": true, 00:28:17.252 "flush": true, 00:28:17.252 "reset": true, 00:28:17.252 "nvme_admin": false, 00:28:17.252 "nvme_io": false, 00:28:17.252 "nvme_io_md": false, 00:28:17.252 "write_zeroes": true, 00:28:17.252 "zcopy": true, 00:28:17.252 "get_zone_info": false, 00:28:17.252 "zone_management": false, 00:28:17.252 "zone_append": false, 00:28:17.252 "compare": 
false, 00:28:17.252 "compare_and_write": false, 00:28:17.252 "abort": true, 00:28:17.252 "seek_hole": false, 00:28:17.252 "seek_data": false, 00:28:17.252 "copy": true, 00:28:17.252 "nvme_iov_md": false 00:28:17.252 }, 00:28:17.252 "memory_domains": [ 00:28:17.252 { 00:28:17.252 "dma_device_id": "system", 00:28:17.252 "dma_device_type": 1 00:28:17.252 }, 00:28:17.252 { 00:28:17.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.252 "dma_device_type": 2 00:28:17.252 } 00:28:17.252 ], 00:28:17.252 "driver_specific": {} 00:28:17.252 }' 00:28:17.252 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.252 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:17.252 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:17.252 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.510 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:17.510 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:17.510 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.510 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:17.510 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:17.510 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.510 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:17.768 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:17.768 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:17.768 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:17.768 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:18.026 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:18.026 "name": "BaseBdev4", 00:28:18.026 "aliases": [ 00:28:18.026 "3ef4fa40-aece-4d53-b7b6-c65e1e6df185" 00:28:18.026 ], 00:28:18.026 "product_name": "Malloc disk", 00:28:18.026 "block_size": 512, 00:28:18.026 "num_blocks": 65536, 00:28:18.026 "uuid": "3ef4fa40-aece-4d53-b7b6-c65e1e6df185", 00:28:18.026 "assigned_rate_limits": { 00:28:18.026 "rw_ios_per_sec": 0, 00:28:18.026 "rw_mbytes_per_sec": 0, 00:28:18.026 "r_mbytes_per_sec": 0, 00:28:18.026 "w_mbytes_per_sec": 0 00:28:18.026 }, 00:28:18.026 "claimed": true, 00:28:18.026 "claim_type": "exclusive_write", 00:28:18.026 "zoned": false, 00:28:18.026 "supported_io_types": { 00:28:18.026 "read": true, 00:28:18.026 "write": true, 00:28:18.026 "unmap": true, 00:28:18.026 "flush": true, 00:28:18.026 "reset": true, 00:28:18.026 "nvme_admin": false, 00:28:18.026 "nvme_io": false, 00:28:18.026 "nvme_io_md": false, 00:28:18.026 "write_zeroes": true, 00:28:18.026 "zcopy": true, 00:28:18.026 "get_zone_info": false, 00:28:18.026 "zone_management": false, 00:28:18.026 "zone_append": false, 00:28:18.026 "compare": false, 00:28:18.026 "compare_and_write": false, 00:28:18.026 "abort": true, 00:28:18.026 "seek_hole": false, 00:28:18.026 "seek_data": false, 
00:28:18.026 "copy": true, 00:28:18.026 "nvme_iov_md": false 00:28:18.026 }, 00:28:18.026 "memory_domains": [ 00:28:18.026 { 00:28:18.026 "dma_device_id": "system", 00:28:18.026 "dma_device_type": 1 00:28:18.026 }, 00:28:18.026 { 00:28:18.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.026 "dma_device_type": 2 00:28:18.026 } 00:28:18.026 ], 00:28:18.026 "driver_specific": {} 00:28:18.026 }' 00:28:18.026 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:18.026 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:18.026 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:18.026 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:18.026 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:18.026 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:18.026 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:18.026 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:18.284 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:18.284 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:18.284 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:18.284 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:18.284 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:18.541 [2024-07-25 14:11:07.482848] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:18.541 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:18.542 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:18.542 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:18.800 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:18.800 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.800 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.800 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:28:18.800 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.800 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.800 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.058 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.058 "name": "Existed_Raid", 00:28:19.058 "uuid": "d417192c-72ae-4749-a3f5-83ec38942473", 00:28:19.058 "strip_size_kb": 0, 00:28:19.058 "state": "online", 00:28:19.058 "raid_level": "raid1", 00:28:19.058 "superblock": true, 00:28:19.058 "num_base_bdevs": 4, 00:28:19.058 "num_base_bdevs_discovered": 3, 00:28:19.058 "num_base_bdevs_operational": 3, 00:28:19.058 "base_bdevs_list": [ 00:28:19.058 { 00:28:19.058 "name": null, 00:28:19.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.058 "is_configured": false, 00:28:19.058 "data_offset": 2048, 00:28:19.058 "data_size": 63488 00:28:19.058 }, 00:28:19.058 { 00:28:19.058 "name": "BaseBdev2", 00:28:19.058 "uuid": "a9f6c4c8-cb27-4a93-89de-03bb7323a492", 00:28:19.058 "is_configured": true, 00:28:19.058 "data_offset": 2048, 00:28:19.058 "data_size": 63488 00:28:19.058 }, 00:28:19.058 { 00:28:19.058 "name": "BaseBdev3", 00:28:19.058 "uuid": "ee378a41-4ff0-420b-a543-83f4e6e65079", 00:28:19.058 "is_configured": true, 00:28:19.058 "data_offset": 2048, 00:28:19.058 "data_size": 63488 00:28:19.058 }, 00:28:19.058 { 00:28:19.058 "name": "BaseBdev4", 00:28:19.058 "uuid": "3ef4fa40-aece-4d53-b7b6-c65e1e6df185", 00:28:19.058 "is_configured": true, 00:28:19.058 "data_offset": 2048, 00:28:19.058 "data_size": 63488 00:28:19.058 } 00:28:19.058 ] 00:28:19.058 }' 00:28:19.058 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.058 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.697 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:19.697 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:19.697 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.697 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:19.955 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:19.955 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:19.955 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:20.213 [2024-07-25 14:11:09.062517] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:20.213 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:20.213 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:20.213 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:20.213 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.471 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:20.471 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:20.471 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:20.729 [2024-07-25 14:11:09.649592] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:20.729 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:20.729 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:20.729 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:20.729 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.987 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:20.987 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:20.987 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:21.245 [2024-07-25 14:11:10.242476] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:21.245 [2024-07-25 14:11:10.242813] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:21.502 [2024-07-25 14:11:10.327247] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:21.502 [2024-07-25 14:11:10.327451] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:21.502 [2024-07-25 14:11:10.327580] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:28:21.502 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:21.502 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:21.502 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.502 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:21.760 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:21.760 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:21.760 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:28:21.760 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:21.760 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:21.760 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:22.018 BaseBdev2 00:28:22.018 14:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:22.018 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:22.018 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:22.018 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:22.018 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:22.018 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:22.018 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:22.275 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:22.533 [ 00:28:22.533 { 00:28:22.533 "name": "BaseBdev2", 00:28:22.533 "aliases": [ 00:28:22.533 "69d3f531-26fe-4d8a-9ab5-e6e5457977f6" 00:28:22.533 ], 00:28:22.533 "product_name": "Malloc disk", 00:28:22.533 "block_size": 512, 00:28:22.533 "num_blocks": 65536, 00:28:22.533 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:22.533 "assigned_rate_limits": { 00:28:22.533 "rw_ios_per_sec": 0, 00:28:22.533 "rw_mbytes_per_sec": 0, 00:28:22.533 "r_mbytes_per_sec": 0, 00:28:22.533 "w_mbytes_per_sec": 0 00:28:22.533 }, 00:28:22.533 "claimed": false, 00:28:22.533 "zoned": false, 00:28:22.533 "supported_io_types": { 00:28:22.533 "read": true, 00:28:22.533 "write": true, 00:28:22.533 "unmap": true, 00:28:22.533 "flush": true, 00:28:22.533 "reset": true, 00:28:22.533 "nvme_admin": false, 00:28:22.533 "nvme_io": false, 00:28:22.533 "nvme_io_md": false, 00:28:22.533 "write_zeroes": true, 00:28:22.533 "zcopy": true, 00:28:22.533 "get_zone_info": false, 00:28:22.533 "zone_management": false, 00:28:22.533 "zone_append": false, 00:28:22.533 "compare": false, 00:28:22.533 "compare_and_write": false, 00:28:22.533 "abort": true, 00:28:22.533 "seek_hole": false, 00:28:22.533 "seek_data": false, 00:28:22.533 "copy": true, 00:28:22.533 "nvme_iov_md": false 00:28:22.533 }, 00:28:22.533 "memory_domains": [ 00:28:22.533 { 00:28:22.533 "dma_device_id": "system", 00:28:22.533 "dma_device_type": 1 00:28:22.533 }, 00:28:22.533 { 00:28:22.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.533 "dma_device_type": 2 00:28:22.533 } 00:28:22.533 ], 00:28:22.533 "driver_specific": {} 00:28:22.533 } 00:28:22.533 ] 00:28:22.533 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:22.533 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:22.533 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:22.533 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:22.791 BaseBdev3 00:28:22.791 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:22.791 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:22.791 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:28:22.791 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:22.791 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:22.791 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:22.791 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:23.049 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:23.307 [ 00:28:23.307 { 00:28:23.307 "name": "BaseBdev3", 00:28:23.307 "aliases": [ 00:28:23.307 "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42" 00:28:23.307 ], 00:28:23.307 "product_name": "Malloc disk", 00:28:23.307 "block_size": 512, 00:28:23.307 "num_blocks": 65536, 00:28:23.307 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:23.307 "assigned_rate_limits": { 00:28:23.307 "rw_ios_per_sec": 0, 00:28:23.307 "rw_mbytes_per_sec": 0, 00:28:23.307 "r_mbytes_per_sec": 0, 00:28:23.307 "w_mbytes_per_sec": 0 00:28:23.307 }, 00:28:23.307 "claimed": false, 00:28:23.307 "zoned": false, 00:28:23.307 "supported_io_types": { 00:28:23.307 "read": true, 00:28:23.307 "write": true, 00:28:23.307 "unmap": true, 00:28:23.307 "flush": true, 00:28:23.307 "reset": true, 00:28:23.307 "nvme_admin": false, 00:28:23.307 "nvme_io": false, 00:28:23.307 "nvme_io_md": false, 00:28:23.307 "write_zeroes": true, 00:28:23.307 "zcopy": true, 00:28:23.307 "get_zone_info": false, 00:28:23.307 "zone_management": false, 00:28:23.307 "zone_append": false, 00:28:23.307 "compare": false, 00:28:23.307 "compare_and_write": false, 00:28:23.307 "abort": true, 00:28:23.307 "seek_hole": false, 00:28:23.307 "seek_data": false, 00:28:23.307 "copy": true, 00:28:23.307 "nvme_iov_md": false 00:28:23.307 }, 00:28:23.307 "memory_domains": [ 00:28:23.307 { 00:28:23.307 "dma_device_id": "system", 00:28:23.307 "dma_device_type": 1 00:28:23.307 }, 00:28:23.307 { 00:28:23.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.307 "dma_device_type": 2 00:28:23.307 } 00:28:23.307 ], 00:28:23.307 "driver_specific": {} 00:28:23.307 } 00:28:23.307 ] 00:28:23.307 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:23.307 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:23.307 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:23.307 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:23.565 BaseBdev4 00:28:23.565 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:28:23.565 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:23.565 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:23.565 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:23.565 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:23.565 14:11:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:23.565 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:23.841 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:24.099 [ 00:28:24.099 { 00:28:24.099 "name": "BaseBdev4", 00:28:24.099 "aliases": [ 00:28:24.099 "3bfb0320-99c2-41b4-b323-9517b1270a1e" 00:28:24.099 ], 00:28:24.099 "product_name": "Malloc disk", 00:28:24.099 "block_size": 512, 00:28:24.099 "num_blocks": 65536, 00:28:24.099 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:24.099 "assigned_rate_limits": { 00:28:24.099 "rw_ios_per_sec": 0, 00:28:24.099 "rw_mbytes_per_sec": 0, 00:28:24.099 "r_mbytes_per_sec": 0, 00:28:24.099 "w_mbytes_per_sec": 0 00:28:24.099 }, 00:28:24.099 "claimed": false, 00:28:24.099 "zoned": false, 00:28:24.099 "supported_io_types": { 00:28:24.099 "read": true, 00:28:24.099 "write": true, 00:28:24.099 "unmap": true, 00:28:24.099 "flush": true, 00:28:24.099 "reset": true, 00:28:24.099 "nvme_admin": false, 00:28:24.099 "nvme_io": false, 00:28:24.099 "nvme_io_md": false, 00:28:24.099 "write_zeroes": true, 00:28:24.099 "zcopy": true, 00:28:24.099 "get_zone_info": false, 00:28:24.099 "zone_management": false, 00:28:24.099 "zone_append": false, 00:28:24.099 "compare": false, 00:28:24.099 "compare_and_write": false, 00:28:24.099 "abort": true, 00:28:24.099 "seek_hole": false, 00:28:24.099 "seek_data": false, 00:28:24.099 "copy": true, 00:28:24.099 "nvme_iov_md": false 00:28:24.099 }, 00:28:24.099 "memory_domains": [ 00:28:24.099 { 00:28:24.099 "dma_device_id": "system", 00:28:24.099 "dma_device_type": 1 00:28:24.099 }, 00:28:24.099 { 00:28:24.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:24.099 "dma_device_type": 2 00:28:24.099 } 00:28:24.099 ], 00:28:24.099 "driver_specific": {} 00:28:24.099 } 00:28:24.099 ] 00:28:24.099 14:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:24.099 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:24.099 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:24.099 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:24.357 [2024-07-25 14:11:13.346297] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:24.357 [2024-07-25 14:11:13.346597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:24.357 [2024-07-25 14:11:13.346810] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:24.357 [2024-07-25 14:11:13.349341] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:24.357 [2024-07-25 14:11:13.349549] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:24.357 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:24.357 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:28:24.357 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:24.357 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.358 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:24.924 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.924 "name": "Existed_Raid", 00:28:24.924 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:24.924 "strip_size_kb": 0, 00:28:24.924 "state": "configuring", 00:28:24.924 "raid_level": "raid1", 00:28:24.924 "superblock": true, 00:28:24.924 "num_base_bdevs": 4, 00:28:24.924 "num_base_bdevs_discovered": 3, 00:28:24.924 "num_base_bdevs_operational": 4, 00:28:24.924 "base_bdevs_list": [ 00:28:24.924 { 00:28:24.924 "name": "BaseBdev1", 00:28:24.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.924 "is_configured": false, 00:28:24.924 "data_offset": 0, 00:28:24.924 "data_size": 0 00:28:24.924 }, 00:28:24.924 { 00:28:24.924 "name": "BaseBdev2", 00:28:24.924 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:24.924 "is_configured": true, 00:28:24.924 "data_offset": 2048, 00:28:24.924 "data_size": 63488 00:28:24.924 }, 00:28:24.924 { 00:28:24.924 "name": "BaseBdev3", 00:28:24.924 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:24.924 "is_configured": true, 00:28:24.924 "data_offset": 2048, 00:28:24.924 "data_size": 63488 00:28:24.924 }, 00:28:24.924 { 00:28:24.924 "name": "BaseBdev4", 00:28:24.924 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:24.924 "is_configured": true, 00:28:24.924 "data_offset": 2048, 00:28:24.924 "data_size": 63488 00:28:24.924 } 00:28:24.924 ] 00:28:24.924 }' 00:28:24.924 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.924 14:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.493 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:25.753 [2024-07-25 14:11:14.594585] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.753 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:26.012 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.012 "name": "Existed_Raid", 00:28:26.012 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:26.012 "strip_size_kb": 0, 00:28:26.012 "state": "configuring", 00:28:26.012 "raid_level": "raid1", 00:28:26.012 "superblock": true, 00:28:26.012 "num_base_bdevs": 4, 00:28:26.012 "num_base_bdevs_discovered": 2, 00:28:26.012 "num_base_bdevs_operational": 4, 00:28:26.012 "base_bdevs_list": [ 00:28:26.012 { 00:28:26.012 "name": "BaseBdev1", 00:28:26.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.012 "is_configured": false, 00:28:26.012 "data_offset": 0, 00:28:26.012 "data_size": 0 00:28:26.012 }, 00:28:26.012 { 00:28:26.012 "name": null, 00:28:26.013 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:26.013 "is_configured": false, 00:28:26.013 "data_offset": 2048, 00:28:26.013 "data_size": 63488 00:28:26.013 }, 00:28:26.013 { 00:28:26.013 "name": "BaseBdev3", 00:28:26.013 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:26.013 "is_configured": true, 00:28:26.013 "data_offset": 2048, 00:28:26.013 "data_size": 63488 00:28:26.013 }, 00:28:26.013 { 00:28:26.013 "name": "BaseBdev4", 00:28:26.013 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:26.013 "is_configured": true, 00:28:26.013 "data_offset": 2048, 00:28:26.013 "data_size": 63488 00:28:26.013 } 00:28:26.013 ] 00:28:26.013 }' 00:28:26.013 14:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.013 14:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.581 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.581 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:26.839 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:26.839 14:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:27.096 [2024-07-25 14:11:16.038054] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:28:27.096 BaseBdev1 00:28:27.096 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:27.096 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:27.096 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:27.096 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:27.096 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:27.096 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:27.096 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:27.354 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:27.612 [ 00:28:27.612 { 00:28:27.612 "name": "BaseBdev1", 00:28:27.612 "aliases": [ 00:28:27.612 "9417207c-96f0-4be6-8a28-d18c60715ada" 00:28:27.612 ], 00:28:27.612 "product_name": "Malloc disk", 00:28:27.612 "block_size": 512, 00:28:27.612 "num_blocks": 65536, 00:28:27.612 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:27.612 "assigned_rate_limits": { 00:28:27.612 "rw_ios_per_sec": 0, 00:28:27.612 "rw_mbytes_per_sec": 0, 00:28:27.612 "r_mbytes_per_sec": 0, 00:28:27.612 "w_mbytes_per_sec": 0 00:28:27.612 }, 00:28:27.612 "claimed": true, 00:28:27.612 "claim_type": "exclusive_write", 00:28:27.612 "zoned": false, 00:28:27.612 "supported_io_types": { 00:28:27.612 "read": true, 00:28:27.612 "write": true, 00:28:27.612 "unmap": true, 00:28:27.612 "flush": true, 00:28:27.612 "reset": true, 00:28:27.612 "nvme_admin": false, 00:28:27.612 "nvme_io": false, 00:28:27.612 "nvme_io_md": false, 00:28:27.612 "write_zeroes": true, 00:28:27.612 "zcopy": true, 00:28:27.612 "get_zone_info": false, 00:28:27.612 "zone_management": false, 00:28:27.612 "zone_append": false, 00:28:27.612 "compare": false, 00:28:27.612 "compare_and_write": false, 00:28:27.612 "abort": true, 00:28:27.612 "seek_hole": false, 00:28:27.612 "seek_data": false, 00:28:27.612 "copy": true, 00:28:27.612 "nvme_iov_md": false 00:28:27.612 }, 00:28:27.612 "memory_domains": [ 00:28:27.612 { 00:28:27.612 "dma_device_id": "system", 00:28:27.612 "dma_device_type": 1 00:28:27.612 }, 00:28:27.612 { 00:28:27.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.612 "dma_device_type": 2 00:28:27.612 } 00:28:27.612 ], 00:28:27.612 "driver_specific": {} 00:28:27.612 } 00:28:27.612 ] 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.612 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:27.870 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:27.870 "name": "Existed_Raid", 00:28:27.870 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:27.870 "strip_size_kb": 0, 00:28:27.870 "state": "configuring", 00:28:27.870 "raid_level": "raid1", 00:28:27.870 "superblock": true, 00:28:27.870 "num_base_bdevs": 4, 00:28:27.870 "num_base_bdevs_discovered": 3, 00:28:27.870 "num_base_bdevs_operational": 4, 00:28:27.870 "base_bdevs_list": [ 00:28:27.870 { 00:28:27.870 "name": "BaseBdev1", 00:28:27.870 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:27.870 "is_configured": true, 00:28:27.870 "data_offset": 2048, 00:28:27.870 "data_size": 63488 00:28:27.870 }, 00:28:27.870 { 00:28:27.870 "name": null, 00:28:27.870 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:27.870 "is_configured": false, 00:28:27.870 "data_offset": 2048, 00:28:27.870 "data_size": 63488 00:28:27.870 }, 00:28:27.870 { 00:28:27.870 "name": "BaseBdev3", 00:28:27.870 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:27.870 "is_configured": true, 00:28:27.870 "data_offset": 2048, 00:28:27.870 "data_size": 63488 00:28:27.870 }, 00:28:27.870 { 00:28:27.870 "name": "BaseBdev4", 00:28:27.870 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:27.870 "is_configured": true, 00:28:27.870 "data_offset": 2048, 00:28:27.870 "data_size": 63488 00:28:27.870 } 00:28:27.870 ] 00:28:27.870 }' 00:28:27.870 14:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:27.870 14:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:28.436 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:28.436 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.694 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:28.694 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:28.951 [2024-07-25 14:11:17.926589] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:28.951 14:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.209 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.209 "name": "Existed_Raid", 00:28:29.209 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:29.209 "strip_size_kb": 0, 00:28:29.209 "state": "configuring", 00:28:29.209 "raid_level": "raid1", 00:28:29.209 "superblock": true, 00:28:29.209 "num_base_bdevs": 4, 00:28:29.209 "num_base_bdevs_discovered": 2, 00:28:29.209 "num_base_bdevs_operational": 4, 00:28:29.209 "base_bdevs_list": [ 00:28:29.209 { 00:28:29.209 "name": "BaseBdev1", 00:28:29.209 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:29.209 "is_configured": true, 00:28:29.209 "data_offset": 2048, 00:28:29.209 "data_size": 63488 00:28:29.209 }, 00:28:29.209 { 00:28:29.209 "name": null, 00:28:29.209 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:29.209 "is_configured": false, 00:28:29.209 "data_offset": 2048, 00:28:29.209 "data_size": 63488 00:28:29.209 }, 00:28:29.209 { 00:28:29.209 "name": null, 00:28:29.209 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:29.209 "is_configured": false, 00:28:29.209 "data_offset": 2048, 00:28:29.209 "data_size": 63488 00:28:29.209 }, 00:28:29.209 { 00:28:29.209 "name": "BaseBdev4", 00:28:29.209 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:29.209 "is_configured": true, 00:28:29.209 "data_offset": 2048, 00:28:29.209 "data_size": 63488 00:28:29.209 } 00:28:29.209 ] 00:28:29.209 }' 00:28:29.209 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.209 14:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:29.825 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.825 14:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:30.090 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:30.090 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:30.347 [2024-07-25 14:11:19.294243] bdev_raid.c:3386:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.347 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.348 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:30.348 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.605 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.605 "name": "Existed_Raid", 00:28:30.605 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:30.605 "strip_size_kb": 0, 00:28:30.605 "state": "configuring", 00:28:30.605 "raid_level": "raid1", 00:28:30.605 "superblock": true, 00:28:30.605 "num_base_bdevs": 4, 00:28:30.605 "num_base_bdevs_discovered": 3, 00:28:30.605 "num_base_bdevs_operational": 4, 00:28:30.605 "base_bdevs_list": [ 00:28:30.605 { 00:28:30.605 "name": "BaseBdev1", 00:28:30.605 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:30.605 "is_configured": true, 00:28:30.605 "data_offset": 2048, 00:28:30.605 "data_size": 63488 00:28:30.605 }, 00:28:30.605 { 00:28:30.605 "name": null, 00:28:30.605 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:30.605 "is_configured": false, 00:28:30.605 "data_offset": 2048, 00:28:30.605 "data_size": 63488 00:28:30.605 }, 00:28:30.605 { 00:28:30.605 "name": "BaseBdev3", 00:28:30.605 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:30.605 "is_configured": true, 00:28:30.605 "data_offset": 2048, 00:28:30.605 "data_size": 63488 00:28:30.605 }, 00:28:30.605 { 00:28:30.605 "name": "BaseBdev4", 00:28:30.605 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:30.605 "is_configured": true, 00:28:30.605 "data_offset": 2048, 00:28:30.605 "data_size": 63488 00:28:30.605 } 00:28:30.605 ] 00:28:30.605 }' 00:28:30.605 14:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.605 14:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:31.171 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.171 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:31.428 14:11:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:31.428 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:31.686 [2024-07-25 14:11:20.674682] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.944 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:32.201 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:32.201 "name": "Existed_Raid", 00:28:32.201 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:32.201 "strip_size_kb": 0, 00:28:32.201 "state": "configuring", 00:28:32.201 "raid_level": "raid1", 00:28:32.201 "superblock": true, 00:28:32.201 "num_base_bdevs": 4, 00:28:32.201 "num_base_bdevs_discovered": 2, 00:28:32.201 "num_base_bdevs_operational": 4, 00:28:32.201 "base_bdevs_list": [ 00:28:32.201 { 00:28:32.201 "name": null, 00:28:32.201 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:32.201 "is_configured": false, 00:28:32.201 "data_offset": 2048, 00:28:32.201 "data_size": 63488 00:28:32.201 }, 00:28:32.201 { 00:28:32.201 "name": null, 00:28:32.201 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:32.201 "is_configured": false, 00:28:32.201 "data_offset": 2048, 00:28:32.201 "data_size": 63488 00:28:32.201 }, 00:28:32.201 { 00:28:32.201 "name": "BaseBdev3", 00:28:32.201 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:32.201 "is_configured": true, 00:28:32.201 "data_offset": 2048, 00:28:32.201 "data_size": 63488 00:28:32.201 }, 00:28:32.201 { 00:28:32.201 "name": "BaseBdev4", 00:28:32.201 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:32.201 "is_configured": true, 00:28:32.201 "data_offset": 2048, 00:28:32.201 "data_size": 63488 00:28:32.201 } 00:28:32.201 ] 00:28:32.201 }' 00:28:32.201 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:32.201 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:32.765 
14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.765 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:33.022 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:33.022 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:33.280 [2024-07-25 14:11:22.160440] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.280 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:33.537 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:33.537 "name": "Existed_Raid", 00:28:33.537 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:33.537 "strip_size_kb": 0, 00:28:33.537 "state": "configuring", 00:28:33.537 "raid_level": "raid1", 00:28:33.537 "superblock": true, 00:28:33.537 "num_base_bdevs": 4, 00:28:33.537 "num_base_bdevs_discovered": 3, 00:28:33.537 "num_base_bdevs_operational": 4, 00:28:33.537 "base_bdevs_list": [ 00:28:33.537 { 00:28:33.537 "name": null, 00:28:33.537 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:33.537 "is_configured": false, 00:28:33.537 "data_offset": 2048, 00:28:33.537 "data_size": 63488 00:28:33.537 }, 00:28:33.537 { 00:28:33.537 "name": "BaseBdev2", 00:28:33.537 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:33.537 "is_configured": true, 00:28:33.537 "data_offset": 2048, 00:28:33.537 "data_size": 63488 00:28:33.537 }, 00:28:33.537 { 00:28:33.537 "name": "BaseBdev3", 00:28:33.537 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:33.537 "is_configured": true, 00:28:33.537 "data_offset": 2048, 00:28:33.537 "data_size": 63488 00:28:33.537 }, 00:28:33.537 { 00:28:33.537 "name": "BaseBdev4", 00:28:33.537 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 
00:28:33.537 "is_configured": true, 00:28:33.537 "data_offset": 2048, 00:28:33.537 "data_size": 63488 00:28:33.537 } 00:28:33.537 ] 00:28:33.537 }' 00:28:33.537 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:33.537 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:34.142 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.142 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:34.400 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:34.400 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.400 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:34.657 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9417207c-96f0-4be6-8a28-d18c60715ada 00:28:34.915 [2024-07-25 14:11:23.863781] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:34.915 [2024-07-25 14:11:23.864291] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:28:34.915 [2024-07-25 14:11:23.864442] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:34.915 [2024-07-25 14:11:23.864610] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:34.915 [2024-07-25 14:11:23.865163] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:28:34.915 [2024-07-25 14:11:23.865302] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:28:34.915 NewBaseBdev 00:28:34.915 [2024-07-25 14:11:23.865555] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.915 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:34.915 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:28:34.915 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:34.915 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:34.916 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:34.916 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:34.916 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:35.174 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:35.433 [ 00:28:35.433 { 00:28:35.433 "name": "NewBaseBdev", 00:28:35.433 "aliases": [ 00:28:35.433 "9417207c-96f0-4be6-8a28-d18c60715ada" 00:28:35.433 ], 00:28:35.433 "product_name": 
"Malloc disk", 00:28:35.433 "block_size": 512, 00:28:35.433 "num_blocks": 65536, 00:28:35.433 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:35.433 "assigned_rate_limits": { 00:28:35.433 "rw_ios_per_sec": 0, 00:28:35.433 "rw_mbytes_per_sec": 0, 00:28:35.433 "r_mbytes_per_sec": 0, 00:28:35.433 "w_mbytes_per_sec": 0 00:28:35.433 }, 00:28:35.433 "claimed": true, 00:28:35.433 "claim_type": "exclusive_write", 00:28:35.433 "zoned": false, 00:28:35.433 "supported_io_types": { 00:28:35.433 "read": true, 00:28:35.433 "write": true, 00:28:35.433 "unmap": true, 00:28:35.433 "flush": true, 00:28:35.433 "reset": true, 00:28:35.433 "nvme_admin": false, 00:28:35.433 "nvme_io": false, 00:28:35.433 "nvme_io_md": false, 00:28:35.433 "write_zeroes": true, 00:28:35.433 "zcopy": true, 00:28:35.433 "get_zone_info": false, 00:28:35.433 "zone_management": false, 00:28:35.433 "zone_append": false, 00:28:35.433 "compare": false, 00:28:35.433 "compare_and_write": false, 00:28:35.433 "abort": true, 00:28:35.433 "seek_hole": false, 00:28:35.433 "seek_data": false, 00:28:35.433 "copy": true, 00:28:35.433 "nvme_iov_md": false 00:28:35.433 }, 00:28:35.433 "memory_domains": [ 00:28:35.433 { 00:28:35.433 "dma_device_id": "system", 00:28:35.433 "dma_device_type": 1 00:28:35.433 }, 00:28:35.433 { 00:28:35.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:35.433 "dma_device_type": 2 00:28:35.433 } 00:28:35.433 ], 00:28:35.433 "driver_specific": {} 00:28:35.433 } 00:28:35.433 ] 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.433 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:35.691 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.691 "name": "Existed_Raid", 00:28:35.691 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:35.691 "strip_size_kb": 0, 00:28:35.691 "state": "online", 00:28:35.691 "raid_level": "raid1", 00:28:35.691 "superblock": true, 00:28:35.691 "num_base_bdevs": 4, 00:28:35.691 "num_base_bdevs_discovered": 4, 00:28:35.691 "num_base_bdevs_operational": 4, 
00:28:35.691 "base_bdevs_list": [ 00:28:35.691 { 00:28:35.691 "name": "NewBaseBdev", 00:28:35.691 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:35.691 "is_configured": true, 00:28:35.691 "data_offset": 2048, 00:28:35.691 "data_size": 63488 00:28:35.691 }, 00:28:35.691 { 00:28:35.691 "name": "BaseBdev2", 00:28:35.691 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:35.691 "is_configured": true, 00:28:35.691 "data_offset": 2048, 00:28:35.691 "data_size": 63488 00:28:35.691 }, 00:28:35.691 { 00:28:35.691 "name": "BaseBdev3", 00:28:35.691 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:35.691 "is_configured": true, 00:28:35.691 "data_offset": 2048, 00:28:35.691 "data_size": 63488 00:28:35.691 }, 00:28:35.691 { 00:28:35.691 "name": "BaseBdev4", 00:28:35.691 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:35.691 "is_configured": true, 00:28:35.691 "data_offset": 2048, 00:28:35.691 "data_size": 63488 00:28:35.691 } 00:28:35.691 ] 00:28:35.691 }' 00:28:35.691 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.691 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:36.271 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:36.529 [2024-07-25 14:11:25.476583] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:36.529 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:36.529 "name": "Existed_Raid", 00:28:36.529 "aliases": [ 00:28:36.529 "33a5054f-a974-491d-bdd8-26544ce6bbda" 00:28:36.529 ], 00:28:36.529 "product_name": "Raid Volume", 00:28:36.529 "block_size": 512, 00:28:36.529 "num_blocks": 63488, 00:28:36.529 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:36.529 "assigned_rate_limits": { 00:28:36.529 "rw_ios_per_sec": 0, 00:28:36.529 "rw_mbytes_per_sec": 0, 00:28:36.529 "r_mbytes_per_sec": 0, 00:28:36.529 "w_mbytes_per_sec": 0 00:28:36.529 }, 00:28:36.529 "claimed": false, 00:28:36.529 "zoned": false, 00:28:36.529 "supported_io_types": { 00:28:36.529 "read": true, 00:28:36.529 "write": true, 00:28:36.529 "unmap": false, 00:28:36.529 "flush": false, 00:28:36.529 "reset": true, 00:28:36.529 "nvme_admin": false, 00:28:36.529 "nvme_io": false, 00:28:36.529 "nvme_io_md": false, 00:28:36.529 "write_zeroes": true, 00:28:36.529 "zcopy": false, 00:28:36.529 "get_zone_info": false, 00:28:36.529 "zone_management": false, 00:28:36.529 "zone_append": false, 00:28:36.529 "compare": false, 00:28:36.529 "compare_and_write": false, 00:28:36.529 "abort": false, 00:28:36.529 "seek_hole": false, 00:28:36.529 
"seek_data": false, 00:28:36.529 "copy": false, 00:28:36.529 "nvme_iov_md": false 00:28:36.529 }, 00:28:36.529 "memory_domains": [ 00:28:36.529 { 00:28:36.529 "dma_device_id": "system", 00:28:36.529 "dma_device_type": 1 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.529 "dma_device_type": 2 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "dma_device_id": "system", 00:28:36.529 "dma_device_type": 1 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.529 "dma_device_type": 2 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "dma_device_id": "system", 00:28:36.529 "dma_device_type": 1 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.529 "dma_device_type": 2 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "dma_device_id": "system", 00:28:36.529 "dma_device_type": 1 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.529 "dma_device_type": 2 00:28:36.529 } 00:28:36.529 ], 00:28:36.529 "driver_specific": { 00:28:36.529 "raid": { 00:28:36.529 "uuid": "33a5054f-a974-491d-bdd8-26544ce6bbda", 00:28:36.529 "strip_size_kb": 0, 00:28:36.529 "state": "online", 00:28:36.529 "raid_level": "raid1", 00:28:36.529 "superblock": true, 00:28:36.529 "num_base_bdevs": 4, 00:28:36.529 "num_base_bdevs_discovered": 4, 00:28:36.529 "num_base_bdevs_operational": 4, 00:28:36.529 "base_bdevs_list": [ 00:28:36.529 { 00:28:36.529 "name": "NewBaseBdev", 00:28:36.529 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:36.529 "is_configured": true, 00:28:36.529 "data_offset": 2048, 00:28:36.529 "data_size": 63488 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "name": "BaseBdev2", 00:28:36.529 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:36.529 "is_configured": true, 00:28:36.529 "data_offset": 2048, 00:28:36.529 "data_size": 63488 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "name": "BaseBdev3", 00:28:36.529 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:36.529 "is_configured": true, 00:28:36.529 "data_offset": 2048, 00:28:36.529 "data_size": 63488 00:28:36.529 }, 00:28:36.529 { 00:28:36.529 "name": "BaseBdev4", 00:28:36.529 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:36.529 "is_configured": true, 00:28:36.529 "data_offset": 2048, 00:28:36.529 "data_size": 63488 00:28:36.529 } 00:28:36.529 ] 00:28:36.529 } 00:28:36.529 } 00:28:36.529 }' 00:28:36.529 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:36.529 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:36.529 BaseBdev2 00:28:36.529 BaseBdev3 00:28:36.529 BaseBdev4' 00:28:36.529 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:36.529 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:36.529 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:36.787 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:36.787 "name": "NewBaseBdev", 00:28:36.787 "aliases": [ 00:28:36.787 "9417207c-96f0-4be6-8a28-d18c60715ada" 00:28:36.787 ], 00:28:36.787 "product_name": "Malloc disk", 00:28:36.787 "block_size": 512, 00:28:36.787 "num_blocks": 65536, 
00:28:36.787 "uuid": "9417207c-96f0-4be6-8a28-d18c60715ada", 00:28:36.787 "assigned_rate_limits": { 00:28:36.787 "rw_ios_per_sec": 0, 00:28:36.787 "rw_mbytes_per_sec": 0, 00:28:36.787 "r_mbytes_per_sec": 0, 00:28:36.787 "w_mbytes_per_sec": 0 00:28:36.787 }, 00:28:36.787 "claimed": true, 00:28:36.787 "claim_type": "exclusive_write", 00:28:36.787 "zoned": false, 00:28:36.787 "supported_io_types": { 00:28:36.787 "read": true, 00:28:36.787 "write": true, 00:28:36.787 "unmap": true, 00:28:36.787 "flush": true, 00:28:36.787 "reset": true, 00:28:36.787 "nvme_admin": false, 00:28:36.787 "nvme_io": false, 00:28:36.787 "nvme_io_md": false, 00:28:36.787 "write_zeroes": true, 00:28:36.787 "zcopy": true, 00:28:36.787 "get_zone_info": false, 00:28:36.787 "zone_management": false, 00:28:36.787 "zone_append": false, 00:28:36.787 "compare": false, 00:28:36.787 "compare_and_write": false, 00:28:36.787 "abort": true, 00:28:36.787 "seek_hole": false, 00:28:36.787 "seek_data": false, 00:28:36.787 "copy": true, 00:28:36.787 "nvme_iov_md": false 00:28:36.787 }, 00:28:36.787 "memory_domains": [ 00:28:36.787 { 00:28:36.787 "dma_device_id": "system", 00:28:36.787 "dma_device_type": 1 00:28:36.787 }, 00:28:36.787 { 00:28:36.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:36.787 "dma_device_type": 2 00:28:36.787 } 00:28:36.787 ], 00:28:36.787 "driver_specific": {} 00:28:36.787 }' 00:28:36.787 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:37.045 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:37.045 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:37.045 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:37.045 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:37.045 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:37.045 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:37.045 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:37.045 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:37.045 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:37.302 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:37.302 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:37.302 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:37.302 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:37.302 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:37.560 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:37.560 "name": "BaseBdev2", 00:28:37.560 "aliases": [ 00:28:37.560 "69d3f531-26fe-4d8a-9ab5-e6e5457977f6" 00:28:37.560 ], 00:28:37.560 "product_name": "Malloc disk", 00:28:37.560 "block_size": 512, 00:28:37.560 "num_blocks": 65536, 00:28:37.560 "uuid": "69d3f531-26fe-4d8a-9ab5-e6e5457977f6", 00:28:37.560 "assigned_rate_limits": { 00:28:37.560 "rw_ios_per_sec": 0, 00:28:37.560 
"rw_mbytes_per_sec": 0, 00:28:37.560 "r_mbytes_per_sec": 0, 00:28:37.560 "w_mbytes_per_sec": 0 00:28:37.560 }, 00:28:37.560 "claimed": true, 00:28:37.560 "claim_type": "exclusive_write", 00:28:37.560 "zoned": false, 00:28:37.560 "supported_io_types": { 00:28:37.560 "read": true, 00:28:37.560 "write": true, 00:28:37.560 "unmap": true, 00:28:37.560 "flush": true, 00:28:37.560 "reset": true, 00:28:37.560 "nvme_admin": false, 00:28:37.560 "nvme_io": false, 00:28:37.560 "nvme_io_md": false, 00:28:37.560 "write_zeroes": true, 00:28:37.560 "zcopy": true, 00:28:37.560 "get_zone_info": false, 00:28:37.560 "zone_management": false, 00:28:37.560 "zone_append": false, 00:28:37.560 "compare": false, 00:28:37.560 "compare_and_write": false, 00:28:37.560 "abort": true, 00:28:37.560 "seek_hole": false, 00:28:37.560 "seek_data": false, 00:28:37.560 "copy": true, 00:28:37.560 "nvme_iov_md": false 00:28:37.560 }, 00:28:37.560 "memory_domains": [ 00:28:37.560 { 00:28:37.560 "dma_device_id": "system", 00:28:37.560 "dma_device_type": 1 00:28:37.560 }, 00:28:37.560 { 00:28:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:37.560 "dma_device_type": 2 00:28:37.560 } 00:28:37.560 ], 00:28:37.560 "driver_specific": {} 00:28:37.560 }' 00:28:37.560 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:37.560 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:37.560 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:37.560 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:37.560 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:37.818 14:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:38.076 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:38.076 "name": "BaseBdev3", 00:28:38.076 "aliases": [ 00:28:38.076 "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42" 00:28:38.076 ], 00:28:38.076 "product_name": "Malloc disk", 00:28:38.076 "block_size": 512, 00:28:38.076 "num_blocks": 65536, 00:28:38.076 "uuid": "fe0f6e73-bf4e-4bea-be5f-dce7cff2af42", 00:28:38.076 "assigned_rate_limits": { 00:28:38.076 "rw_ios_per_sec": 0, 00:28:38.076 "rw_mbytes_per_sec": 0, 00:28:38.076 "r_mbytes_per_sec": 0, 00:28:38.076 "w_mbytes_per_sec": 0 00:28:38.076 }, 00:28:38.076 "claimed": true, 
00:28:38.076 "claim_type": "exclusive_write", 00:28:38.076 "zoned": false, 00:28:38.076 "supported_io_types": { 00:28:38.076 "read": true, 00:28:38.076 "write": true, 00:28:38.076 "unmap": true, 00:28:38.076 "flush": true, 00:28:38.076 "reset": true, 00:28:38.076 "nvme_admin": false, 00:28:38.076 "nvme_io": false, 00:28:38.076 "nvme_io_md": false, 00:28:38.076 "write_zeroes": true, 00:28:38.076 "zcopy": true, 00:28:38.076 "get_zone_info": false, 00:28:38.076 "zone_management": false, 00:28:38.076 "zone_append": false, 00:28:38.076 "compare": false, 00:28:38.076 "compare_and_write": false, 00:28:38.076 "abort": true, 00:28:38.076 "seek_hole": false, 00:28:38.076 "seek_data": false, 00:28:38.076 "copy": true, 00:28:38.076 "nvme_iov_md": false 00:28:38.076 }, 00:28:38.076 "memory_domains": [ 00:28:38.076 { 00:28:38.076 "dma_device_id": "system", 00:28:38.076 "dma_device_type": 1 00:28:38.076 }, 00:28:38.076 { 00:28:38.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.076 "dma_device_type": 2 00:28:38.076 } 00:28:38.076 ], 00:28:38.076 "driver_specific": {} 00:28:38.076 }' 00:28:38.076 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:38.076 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:38.333 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:38.591 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:38.591 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:38.591 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:38.591 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:38.591 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:38.591 "name": "BaseBdev4", 00:28:38.591 "aliases": [ 00:28:38.591 "3bfb0320-99c2-41b4-b323-9517b1270a1e" 00:28:38.591 ], 00:28:38.591 "product_name": "Malloc disk", 00:28:38.591 "block_size": 512, 00:28:38.591 "num_blocks": 65536, 00:28:38.591 "uuid": "3bfb0320-99c2-41b4-b323-9517b1270a1e", 00:28:38.591 "assigned_rate_limits": { 00:28:38.591 "rw_ios_per_sec": 0, 00:28:38.591 "rw_mbytes_per_sec": 0, 00:28:38.591 "r_mbytes_per_sec": 0, 00:28:38.591 "w_mbytes_per_sec": 0 00:28:38.591 }, 00:28:38.591 "claimed": true, 00:28:38.591 "claim_type": "exclusive_write", 00:28:38.591 "zoned": false, 00:28:38.591 "supported_io_types": { 00:28:38.591 "read": true, 00:28:38.591 
"write": true, 00:28:38.591 "unmap": true, 00:28:38.591 "flush": true, 00:28:38.591 "reset": true, 00:28:38.591 "nvme_admin": false, 00:28:38.591 "nvme_io": false, 00:28:38.591 "nvme_io_md": false, 00:28:38.591 "write_zeroes": true, 00:28:38.591 "zcopy": true, 00:28:38.591 "get_zone_info": false, 00:28:38.591 "zone_management": false, 00:28:38.591 "zone_append": false, 00:28:38.591 "compare": false, 00:28:38.591 "compare_and_write": false, 00:28:38.591 "abort": true, 00:28:38.591 "seek_hole": false, 00:28:38.591 "seek_data": false, 00:28:38.591 "copy": true, 00:28:38.591 "nvme_iov_md": false 00:28:38.591 }, 00:28:38.591 "memory_domains": [ 00:28:38.591 { 00:28:38.591 "dma_device_id": "system", 00:28:38.591 "dma_device_type": 1 00:28:38.591 }, 00:28:38.591 { 00:28:38.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.591 "dma_device_type": 2 00:28:38.591 } 00:28:38.591 ], 00:28:38.591 "driver_specific": {} 00:28:38.591 }' 00:28:38.591 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:38.849 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:38.849 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:38.849 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:38.849 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:38.849 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:38.849 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:39.106 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:39.106 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:39.106 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:39.106 14:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:39.106 14:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:39.106 14:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:39.363 [2024-07-25 14:11:28.256877] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:39.363 [2024-07-25 14:11:28.257061] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:39.363 [2024-07-25 14:11:28.257263] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:39.363 [2024-07-25 14:11:28.257689] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:39.363 [2024-07-25 14:11:28.257836] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 141870 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 141870 ']' 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 141870 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:28:39.363 14:11:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141870 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141870' 00:28:39.363 killing process with pid 141870 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 141870 00:28:39.363 [2024-07-25 14:11:28.297273] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:39.363 14:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 141870 00:28:39.620 [2024-07-25 14:11:28.620579] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:41.014 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:28:41.015 00:28:41.015 real 0m36.597s 00:28:41.015 user 1m8.056s 00:28:41.015 sys 0m4.167s 00:28:41.015 14:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:41.015 14:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.015 ************************************ 00:28:41.015 END TEST raid_state_function_test_sb 00:28:41.015 ************************************ 00:28:41.015 14:11:29 bdev_raid -- bdev/bdev_raid.sh@1023 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:28:41.015 14:11:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:41.015 14:11:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:41.015 14:11:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:41.015 ************************************ 00:28:41.015 START TEST raid_superblock_test 00:28:41.015 ************************************ 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 
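Every verify_raid_bdev_state call in the raid_state_function_test_sb run that just finished reduces to the same pattern: dump all raid bdevs over RPC, select the bdev under test with jq, and compare individual fields against the expected values. A stand-alone sketch of that query pattern is given below; it uses only the RPC command and jq filters visible in the log, while the expected values are illustrative (the test alternates between "configuring" and "online").

  #!/usr/bin/env bash
  # Hypothetical re-creation of the state check performed repeatedly above.
  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Fetch every raid bdev and keep only the one named Existed_Raid.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid")')

  # Pull out the fields the test compares.
  state=$(jq -r '.state' <<<"$info")
  level=$(jq -r '.raid_level' <<<"$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")

  # Illustrative expectations only.
  [[ $state == configuring && $level == raid1 ]] || echo "unexpected state: $state/$level"
  echo "base bdevs discovered: $discovered"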
00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=142993 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 142993 /var/tmp/spdk-raid.sock 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 142993 ']' 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:41.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.015 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.015 [2024-07-25 14:11:29.866436] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:28:41.015 [2024-07-25 14:11:29.866834] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142993 ] 00:28:41.015 [2024-07-25 14:11:30.037241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.272 [2024-07-25 14:11:30.274811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.530 [2024-07-25 14:11:30.472761] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:41.787 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:42.044 malloc1 00:28:42.302 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:42.559 [2024-07-25 14:11:31.370667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:42.559 [2024-07-25 14:11:31.370945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:42.559 [2024-07-25 14:11:31.371125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:42.559 [2024-07-25 14:11:31.371251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:42.559 [2024-07-25 14:11:31.374001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:42.559 [2024-07-25 14:11:31.374176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:42.560 pt1 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:42.560 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:42.817 malloc2 00:28:42.817 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:43.074 [2024-07-25 14:11:31.941001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:43.074 [2024-07-25 14:11:31.941281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:43.074 [2024-07-25 14:11:31.941444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:43.074 [2024-07-25 14:11:31.941569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:43.074 [2024-07-25 14:11:31.944162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:43.075 [2024-07-25 14:11:31.944337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:43.075 pt2 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 
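The superblock test builds each base device as a malloc bdev wrapped in a passthru bdev carrying a fixed UUID (00000000-0000-0000-0000-000000000001 through -0004), and the entries above and below show that pair being created for pt1 through pt4 in turn. One iteration of that setup, using the exact sizes and UUID format from the log, might be replayed by hand as in the sketch below; only the commands shown in the log are used.

  #!/usr/bin/env bash
  # Hypothetical single iteration of the base-bdev setup loop seen in the log:
  # a 32 MiB malloc bdev is wrapped in a passthru bdev with a well-known UUID.
  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
  "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 \
          -u 00000000-0000-0000-0000-000000000001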
00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:43.075 14:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:43.332 malloc3 00:28:43.332 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:43.590 [2024-07-25 14:11:32.503989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:43.590 [2024-07-25 14:11:32.504375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:43.590 [2024-07-25 14:11:32.504542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:43.590 [2024-07-25 14:11:32.504676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:43.590 [2024-07-25 14:11:32.507315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:43.590 [2024-07-25 14:11:32.507494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:43.590 pt3 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:43.590 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:28:43.847 malloc4 00:28:43.847 14:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:44.105 [2024-07-25 14:11:33.100709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:44.105 [2024-07-25 14:11:33.101129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.105 [2024-07-25 14:11:33.101294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:44.105 [2024-07-25 14:11:33.101424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.105 [2024-07-25 14:11:33.104053] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.105 [2024-07-25 14:11:33.104232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:44.105 pt4 00:28:44.105 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:44.105 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:44.105 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:28:44.365 [2024-07-25 14:11:33.348833] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:44.365 [2024-07-25 14:11:33.351249] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:44.365 [2024-07-25 14:11:33.351468] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:44.365 [2024-07-25 14:11:33.351689] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:44.365 [2024-07-25 14:11:33.352049] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:28:44.365 [2024-07-25 14:11:33.352177] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:44.365 [2024-07-25 14:11:33.352466] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:44.365 [2024-07-25 14:11:33.353016] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:28:44.365 [2024-07-25 14:11:33.353144] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:28:44.365 [2024-07-25 14:11:33.353485] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.365 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.641 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:44.641 "name": "raid_bdev1", 00:28:44.641 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:44.641 "strip_size_kb": 0, 00:28:44.641 "state": "online", 00:28:44.641 
"raid_level": "raid1", 00:28:44.641 "superblock": true, 00:28:44.641 "num_base_bdevs": 4, 00:28:44.641 "num_base_bdevs_discovered": 4, 00:28:44.641 "num_base_bdevs_operational": 4, 00:28:44.641 "base_bdevs_list": [ 00:28:44.641 { 00:28:44.641 "name": "pt1", 00:28:44.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:44.641 "is_configured": true, 00:28:44.641 "data_offset": 2048, 00:28:44.641 "data_size": 63488 00:28:44.641 }, 00:28:44.641 { 00:28:44.641 "name": "pt2", 00:28:44.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:44.641 "is_configured": true, 00:28:44.641 "data_offset": 2048, 00:28:44.641 "data_size": 63488 00:28:44.641 }, 00:28:44.641 { 00:28:44.641 "name": "pt3", 00:28:44.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:44.641 "is_configured": true, 00:28:44.641 "data_offset": 2048, 00:28:44.641 "data_size": 63488 00:28:44.641 }, 00:28:44.641 { 00:28:44.641 "name": "pt4", 00:28:44.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:44.641 "is_configured": true, 00:28:44.641 "data_offset": 2048, 00:28:44.641 "data_size": 63488 00:28:44.641 } 00:28:44.641 ] 00:28:44.641 }' 00:28:44.641 14:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:44.641 14:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:45.574 [2024-07-25 14:11:34.510063] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:45.574 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:45.574 "name": "raid_bdev1", 00:28:45.574 "aliases": [ 00:28:45.574 "1a6bc7c2-afed-4c1f-bdac-949f5680f42a" 00:28:45.574 ], 00:28:45.574 "product_name": "Raid Volume", 00:28:45.574 "block_size": 512, 00:28:45.574 "num_blocks": 63488, 00:28:45.574 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:45.574 "assigned_rate_limits": { 00:28:45.574 "rw_ios_per_sec": 0, 00:28:45.574 "rw_mbytes_per_sec": 0, 00:28:45.574 "r_mbytes_per_sec": 0, 00:28:45.574 "w_mbytes_per_sec": 0 00:28:45.574 }, 00:28:45.574 "claimed": false, 00:28:45.574 "zoned": false, 00:28:45.574 "supported_io_types": { 00:28:45.574 "read": true, 00:28:45.574 "write": true, 00:28:45.574 "unmap": false, 00:28:45.574 "flush": false, 00:28:45.574 "reset": true, 00:28:45.574 "nvme_admin": false, 00:28:45.574 "nvme_io": false, 00:28:45.574 "nvme_io_md": false, 00:28:45.574 "write_zeroes": true, 00:28:45.574 "zcopy": false, 00:28:45.574 "get_zone_info": false, 00:28:45.574 "zone_management": false, 00:28:45.574 "zone_append": false, 00:28:45.574 "compare": false, 00:28:45.574 "compare_and_write": false, 
00:28:45.574 "abort": false, 00:28:45.574 "seek_hole": false, 00:28:45.574 "seek_data": false, 00:28:45.574 "copy": false, 00:28:45.574 "nvme_iov_md": false 00:28:45.574 }, 00:28:45.575 "memory_domains": [ 00:28:45.575 { 00:28:45.575 "dma_device_id": "system", 00:28:45.575 "dma_device_type": 1 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.575 "dma_device_type": 2 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "dma_device_id": "system", 00:28:45.575 "dma_device_type": 1 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.575 "dma_device_type": 2 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "dma_device_id": "system", 00:28:45.575 "dma_device_type": 1 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.575 "dma_device_type": 2 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "dma_device_id": "system", 00:28:45.575 "dma_device_type": 1 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.575 "dma_device_type": 2 00:28:45.575 } 00:28:45.575 ], 00:28:45.575 "driver_specific": { 00:28:45.575 "raid": { 00:28:45.575 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:45.575 "strip_size_kb": 0, 00:28:45.575 "state": "online", 00:28:45.575 "raid_level": "raid1", 00:28:45.575 "superblock": true, 00:28:45.575 "num_base_bdevs": 4, 00:28:45.575 "num_base_bdevs_discovered": 4, 00:28:45.575 "num_base_bdevs_operational": 4, 00:28:45.575 "base_bdevs_list": [ 00:28:45.575 { 00:28:45.575 "name": "pt1", 00:28:45.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:45.575 "is_configured": true, 00:28:45.575 "data_offset": 2048, 00:28:45.575 "data_size": 63488 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "name": "pt2", 00:28:45.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:45.575 "is_configured": true, 00:28:45.575 "data_offset": 2048, 00:28:45.575 "data_size": 63488 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "name": "pt3", 00:28:45.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:45.575 "is_configured": true, 00:28:45.575 "data_offset": 2048, 00:28:45.575 "data_size": 63488 00:28:45.575 }, 00:28:45.575 { 00:28:45.575 "name": "pt4", 00:28:45.575 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:45.575 "is_configured": true, 00:28:45.575 "data_offset": 2048, 00:28:45.575 "data_size": 63488 00:28:45.575 } 00:28:45.575 ] 00:28:45.575 } 00:28:45.575 } 00:28:45.575 }' 00:28:45.575 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:45.575 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:45.575 pt2 00:28:45.575 pt3 00:28:45.575 pt4' 00:28:45.575 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:45.575 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:45.575 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:45.833 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:45.833 "name": "pt1", 00:28:45.833 "aliases": [ 00:28:45.833 "00000000-0000-0000-0000-000000000001" 00:28:45.833 ], 00:28:45.833 "product_name": "passthru", 00:28:45.833 "block_size": 512, 00:28:45.833 "num_blocks": 65536, 00:28:45.833 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:28:45.833 "assigned_rate_limits": { 00:28:45.833 "rw_ios_per_sec": 0, 00:28:45.833 "rw_mbytes_per_sec": 0, 00:28:45.833 "r_mbytes_per_sec": 0, 00:28:45.833 "w_mbytes_per_sec": 0 00:28:45.833 }, 00:28:45.833 "claimed": true, 00:28:45.833 "claim_type": "exclusive_write", 00:28:45.833 "zoned": false, 00:28:45.833 "supported_io_types": { 00:28:45.833 "read": true, 00:28:45.833 "write": true, 00:28:45.833 "unmap": true, 00:28:45.833 "flush": true, 00:28:45.833 "reset": true, 00:28:45.833 "nvme_admin": false, 00:28:45.833 "nvme_io": false, 00:28:45.833 "nvme_io_md": false, 00:28:45.833 "write_zeroes": true, 00:28:45.833 "zcopy": true, 00:28:45.833 "get_zone_info": false, 00:28:45.833 "zone_management": false, 00:28:45.833 "zone_append": false, 00:28:45.833 "compare": false, 00:28:45.833 "compare_and_write": false, 00:28:45.833 "abort": true, 00:28:45.833 "seek_hole": false, 00:28:45.833 "seek_data": false, 00:28:45.833 "copy": true, 00:28:45.833 "nvme_iov_md": false 00:28:45.833 }, 00:28:45.833 "memory_domains": [ 00:28:45.833 { 00:28:45.833 "dma_device_id": "system", 00:28:45.834 "dma_device_type": 1 00:28:45.834 }, 00:28:45.834 { 00:28:45.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.834 "dma_device_type": 2 00:28:45.834 } 00:28:45.834 ], 00:28:45.834 "driver_specific": { 00:28:45.834 "passthru": { 00:28:45.834 "name": "pt1", 00:28:45.834 "base_bdev_name": "malloc1" 00:28:45.834 } 00:28:45.834 } 00:28:45.834 }' 00:28:45.834 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:45.834 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.091 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:46.091 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.091 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.091 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:46.091 14:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.091 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.091 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:46.091 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.091 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.349 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:46.349 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:46.349 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:46.349 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:46.606 "name": "pt2", 00:28:46.606 "aliases": [ 00:28:46.606 "00000000-0000-0000-0000-000000000002" 00:28:46.606 ], 00:28:46.606 "product_name": "passthru", 00:28:46.606 "block_size": 512, 00:28:46.606 "num_blocks": 65536, 00:28:46.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:46.606 "assigned_rate_limits": { 00:28:46.606 "rw_ios_per_sec": 0, 00:28:46.606 "rw_mbytes_per_sec": 0, 
00:28:46.606 "r_mbytes_per_sec": 0, 00:28:46.606 "w_mbytes_per_sec": 0 00:28:46.606 }, 00:28:46.606 "claimed": true, 00:28:46.606 "claim_type": "exclusive_write", 00:28:46.606 "zoned": false, 00:28:46.606 "supported_io_types": { 00:28:46.606 "read": true, 00:28:46.606 "write": true, 00:28:46.606 "unmap": true, 00:28:46.606 "flush": true, 00:28:46.606 "reset": true, 00:28:46.606 "nvme_admin": false, 00:28:46.606 "nvme_io": false, 00:28:46.606 "nvme_io_md": false, 00:28:46.606 "write_zeroes": true, 00:28:46.606 "zcopy": true, 00:28:46.606 "get_zone_info": false, 00:28:46.606 "zone_management": false, 00:28:46.606 "zone_append": false, 00:28:46.606 "compare": false, 00:28:46.606 "compare_and_write": false, 00:28:46.606 "abort": true, 00:28:46.606 "seek_hole": false, 00:28:46.606 "seek_data": false, 00:28:46.606 "copy": true, 00:28:46.606 "nvme_iov_md": false 00:28:46.606 }, 00:28:46.606 "memory_domains": [ 00:28:46.606 { 00:28:46.606 "dma_device_id": "system", 00:28:46.606 "dma_device_type": 1 00:28:46.606 }, 00:28:46.606 { 00:28:46.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.606 "dma_device_type": 2 00:28:46.606 } 00:28:46.606 ], 00:28:46.606 "driver_specific": { 00:28:46.606 "passthru": { 00:28:46.606 "name": "pt2", 00:28:46.606 "base_bdev_name": "malloc2" 00:28:46.606 } 00:28:46.606 } 00:28:46.606 }' 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.606 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:46.864 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:46.864 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.864 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:46.864 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:46.864 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:46.864 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:46.864 14:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:47.121 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:47.121 "name": "pt3", 00:28:47.121 "aliases": [ 00:28:47.121 "00000000-0000-0000-0000-000000000003" 00:28:47.121 ], 00:28:47.121 "product_name": "passthru", 00:28:47.121 "block_size": 512, 00:28:47.121 "num_blocks": 65536, 00:28:47.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:47.121 "assigned_rate_limits": { 00:28:47.121 "rw_ios_per_sec": 0, 00:28:47.121 "rw_mbytes_per_sec": 0, 00:28:47.121 "r_mbytes_per_sec": 0, 00:28:47.121 "w_mbytes_per_sec": 0 00:28:47.121 }, 00:28:47.121 "claimed": true, 00:28:47.121 "claim_type": 
"exclusive_write", 00:28:47.121 "zoned": false, 00:28:47.121 "supported_io_types": { 00:28:47.121 "read": true, 00:28:47.121 "write": true, 00:28:47.121 "unmap": true, 00:28:47.121 "flush": true, 00:28:47.121 "reset": true, 00:28:47.121 "nvme_admin": false, 00:28:47.121 "nvme_io": false, 00:28:47.121 "nvme_io_md": false, 00:28:47.121 "write_zeroes": true, 00:28:47.121 "zcopy": true, 00:28:47.121 "get_zone_info": false, 00:28:47.121 "zone_management": false, 00:28:47.121 "zone_append": false, 00:28:47.121 "compare": false, 00:28:47.121 "compare_and_write": false, 00:28:47.121 "abort": true, 00:28:47.121 "seek_hole": false, 00:28:47.121 "seek_data": false, 00:28:47.121 "copy": true, 00:28:47.121 "nvme_iov_md": false 00:28:47.121 }, 00:28:47.121 "memory_domains": [ 00:28:47.121 { 00:28:47.121 "dma_device_id": "system", 00:28:47.121 "dma_device_type": 1 00:28:47.121 }, 00:28:47.121 { 00:28:47.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.121 "dma_device_type": 2 00:28:47.121 } 00:28:47.121 ], 00:28:47.121 "driver_specific": { 00:28:47.121 "passthru": { 00:28:47.121 "name": "pt3", 00:28:47.121 "base_bdev_name": "malloc3" 00:28:47.121 } 00:28:47.121 } 00:28:47.121 }' 00:28:47.121 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.379 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.379 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:47.379 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:47.379 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:47.379 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:47.379 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:47.379 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:47.636 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:47.636 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:47.636 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:47.636 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:47.636 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:47.636 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:47.636 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:28:47.894 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:47.894 "name": "pt4", 00:28:47.894 "aliases": [ 00:28:47.894 "00000000-0000-0000-0000-000000000004" 00:28:47.894 ], 00:28:47.894 "product_name": "passthru", 00:28:47.894 "block_size": 512, 00:28:47.894 "num_blocks": 65536, 00:28:47.894 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:47.894 "assigned_rate_limits": { 00:28:47.894 "rw_ios_per_sec": 0, 00:28:47.894 "rw_mbytes_per_sec": 0, 00:28:47.894 "r_mbytes_per_sec": 0, 00:28:47.894 "w_mbytes_per_sec": 0 00:28:47.894 }, 00:28:47.894 "claimed": true, 00:28:47.894 "claim_type": "exclusive_write", 00:28:47.894 "zoned": false, 00:28:47.894 "supported_io_types": { 00:28:47.894 "read": true, 00:28:47.894 "write": true, 00:28:47.894 
"unmap": true, 00:28:47.894 "flush": true, 00:28:47.894 "reset": true, 00:28:47.894 "nvme_admin": false, 00:28:47.894 "nvme_io": false, 00:28:47.894 "nvme_io_md": false, 00:28:47.894 "write_zeroes": true, 00:28:47.894 "zcopy": true, 00:28:47.894 "get_zone_info": false, 00:28:47.894 "zone_management": false, 00:28:47.894 "zone_append": false, 00:28:47.894 "compare": false, 00:28:47.894 "compare_and_write": false, 00:28:47.894 "abort": true, 00:28:47.894 "seek_hole": false, 00:28:47.894 "seek_data": false, 00:28:47.894 "copy": true, 00:28:47.894 "nvme_iov_md": false 00:28:47.894 }, 00:28:47.894 "memory_domains": [ 00:28:47.894 { 00:28:47.894 "dma_device_id": "system", 00:28:47.894 "dma_device_type": 1 00:28:47.894 }, 00:28:47.894 { 00:28:47.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.894 "dma_device_type": 2 00:28:47.894 } 00:28:47.894 ], 00:28:47.894 "driver_specific": { 00:28:47.894 "passthru": { 00:28:47.894 "name": "pt4", 00:28:47.894 "base_bdev_name": "malloc4" 00:28:47.894 } 00:28:47.894 } 00:28:47.894 }' 00:28:47.894 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.894 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:47.894 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:47.894 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.152 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.152 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:48.152 14:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.152 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.152 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:48.152 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.152 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.152 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:48.152 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:28:48.411 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:48.411 [2024-07-25 14:11:37.434883] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:48.669 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=1a6bc7c2-afed-4c1f-bdac-949f5680f42a 00:28:48.669 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 1a6bc7c2-afed-4c1f-bdac-949f5680f42a ']' 00:28:48.669 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:48.926 [2024-07-25 14:11:37.718642] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:48.926 [2024-07-25 14:11:37.718821] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:48.926 [2024-07-25 14:11:37.719019] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:48.926 [2024-07-25 14:11:37.719235] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:28:48.926 [2024-07-25 14:11:37.719356] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:28:48.926 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:28:48.926 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.183 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:28:49.183 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:28:49.183 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:49.183 14:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:49.440 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:49.440 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:49.697 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:49.697 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:49.697 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:49.697 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:49.953 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:49.953 14:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.210 14:11:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:50.210 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:50.467 [2024-07-25 14:11:39.415460] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:50.467 [2024-07-25 14:11:39.417927] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:50.467 [2024-07-25 14:11:39.418160] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:50.467 [2024-07-25 14:11:39.418366] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:28:50.467 [2024-07-25 14:11:39.418591] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:50.467 [2024-07-25 14:11:39.418861] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:50.467 [2024-07-25 14:11:39.419057] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:28:50.467 [2024-07-25 14:11:39.419250] bdev_raid.c:3293:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:28:50.467 [2024-07-25 14:11:39.419411] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:50.467 [2024-07-25 14:11:39.419533] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:28:50.467 request: 00:28:50.467 { 00:28:50.467 "name": "raid_bdev1", 00:28:50.467 "raid_level": "raid1", 00:28:50.467 "base_bdevs": [ 00:28:50.467 "malloc1", 00:28:50.467 "malloc2", 00:28:50.467 "malloc3", 00:28:50.467 "malloc4" 00:28:50.467 ], 00:28:50.467 "superblock": false, 00:28:50.467 "method": "bdev_raid_create", 00:28:50.467 "req_id": 1 00:28:50.467 } 00:28:50.467 Got JSON-RPC error response 00:28:50.467 response: 00:28:50.467 { 00:28:50.467 "code": -17, 00:28:50.467 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:50.467 } 00:28:50.467 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:28:50.467 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:50.467 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:50.467 14:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:50.467 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:28:50.467 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.725 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:28:50.725 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:28:50.725 14:11:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:50.982 [2024-07-25 14:11:39.887969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:50.982 [2024-07-25 14:11:39.888302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:50.982 [2024-07-25 14:11:39.888494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:50.982 [2024-07-25 14:11:39.888692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:50.982 [2024-07-25 14:11:39.891388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:50.982 [2024-07-25 14:11:39.891566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:50.982 [2024-07-25 14:11:39.891799] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:50.982 [2024-07-25 14:11:39.891969] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:50.982 pt1 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.982 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.983 14:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.240 14:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:51.240 "name": "raid_bdev1", 00:28:51.240 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:51.240 "strip_size_kb": 0, 00:28:51.240 "state": "configuring", 00:28:51.240 "raid_level": "raid1", 00:28:51.240 "superblock": true, 00:28:51.240 "num_base_bdevs": 4, 00:28:51.240 "num_base_bdevs_discovered": 1, 00:28:51.240 "num_base_bdevs_operational": 4, 00:28:51.240 "base_bdevs_list": [ 00:28:51.240 { 00:28:51.240 "name": "pt1", 00:28:51.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:51.240 "is_configured": true, 00:28:51.240 "data_offset": 2048, 00:28:51.240 "data_size": 63488 00:28:51.240 }, 00:28:51.240 { 00:28:51.240 "name": null, 00:28:51.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:51.240 "is_configured": false, 00:28:51.240 "data_offset": 2048, 00:28:51.240 "data_size": 63488 00:28:51.240 }, 00:28:51.240 { 00:28:51.240 "name": 
null, 00:28:51.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:51.240 "is_configured": false, 00:28:51.240 "data_offset": 2048, 00:28:51.240 "data_size": 63488 00:28:51.240 }, 00:28:51.240 { 00:28:51.240 "name": null, 00:28:51.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:51.240 "is_configured": false, 00:28:51.240 "data_offset": 2048, 00:28:51.240 "data_size": 63488 00:28:51.240 } 00:28:51.240 ] 00:28:51.240 }' 00:28:51.240 14:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:51.240 14:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.805 14:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:28:51.805 14:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:52.062 [2024-07-25 14:11:41.016544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:52.062 [2024-07-25 14:11:41.016834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:52.062 [2024-07-25 14:11:41.016928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:52.062 [2024-07-25 14:11:41.017199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:52.062 [2024-07-25 14:11:41.017882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:52.062 [2024-07-25 14:11:41.018053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:52.062 [2024-07-25 14:11:41.018282] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:52.062 [2024-07-25 14:11:41.018416] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:52.062 pt2 00:28:52.062 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:52.320 [2024-07-25 14:11:41.244659] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:52.320 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.320 14:11:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.578 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:52.578 "name": "raid_bdev1", 00:28:52.578 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:52.578 "strip_size_kb": 0, 00:28:52.578 "state": "configuring", 00:28:52.578 "raid_level": "raid1", 00:28:52.578 "superblock": true, 00:28:52.578 "num_base_bdevs": 4, 00:28:52.578 "num_base_bdevs_discovered": 1, 00:28:52.578 "num_base_bdevs_operational": 4, 00:28:52.578 "base_bdevs_list": [ 00:28:52.578 { 00:28:52.578 "name": "pt1", 00:28:52.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:52.578 "is_configured": true, 00:28:52.578 "data_offset": 2048, 00:28:52.578 "data_size": 63488 00:28:52.578 }, 00:28:52.578 { 00:28:52.578 "name": null, 00:28:52.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:52.578 "is_configured": false, 00:28:52.578 "data_offset": 2048, 00:28:52.578 "data_size": 63488 00:28:52.578 }, 00:28:52.578 { 00:28:52.578 "name": null, 00:28:52.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:52.578 "is_configured": false, 00:28:52.578 "data_offset": 2048, 00:28:52.578 "data_size": 63488 00:28:52.578 }, 00:28:52.578 { 00:28:52.578 "name": null, 00:28:52.578 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:52.578 "is_configured": false, 00:28:52.578 "data_offset": 2048, 00:28:52.578 "data_size": 63488 00:28:52.578 } 00:28:52.578 ] 00:28:52.578 }' 00:28:52.578 14:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:52.578 14:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:53.144 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:28:53.144 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:53.144 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:53.401 [2024-07-25 14:11:42.368888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:53.401 [2024-07-25 14:11:42.369152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:53.401 [2024-07-25 14:11:42.369347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:28:53.401 [2024-07-25 14:11:42.369539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:53.401 [2024-07-25 14:11:42.370216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:53.401 [2024-07-25 14:11:42.370386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:53.401 [2024-07-25 14:11:42.370643] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:53.401 [2024-07-25 14:11:42.370788] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:53.401 pt2 00:28:53.401 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:53.401 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:53.401 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:28:53.658 [2024-07-25 14:11:42.656974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:53.658 [2024-07-25 14:11:42.657258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:53.658 [2024-07-25 14:11:42.657418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:53.658 [2024-07-25 14:11:42.657587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:53.658 [2024-07-25 14:11:42.658244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:53.658 [2024-07-25 14:11:42.658420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:53.658 [2024-07-25 14:11:42.658643] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:53.658 [2024-07-25 14:11:42.658775] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:53.658 pt3 00:28:53.658 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:53.658 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:53.658 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:53.916 [2024-07-25 14:11:42.945010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:53.916 [2024-07-25 14:11:42.945277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:53.916 [2024-07-25 14:11:42.945357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:53.916 [2024-07-25 14:11:42.945609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:53.916 [2024-07-25 14:11:42.946195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:53.916 [2024-07-25 14:11:42.946369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:53.916 [2024-07-25 14:11:42.946596] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:53.916 [2024-07-25 14:11:42.946734] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:53.916 [2024-07-25 14:11:42.946961] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:28:53.916 [2024-07-25 14:11:42.947077] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:53.916 [2024-07-25 14:11:42.947228] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:53.916 [2024-07-25 14:11:42.947719] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:28:53.916 [2024-07-25 14:11:42.947850] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:28:53.916 [2024-07-25 14:11:42.948102] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:53.916 pt4 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 
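By this point the original array has been deleted, the passthru bdevs torn down, and a bdev_raid_create issued directly on malloc1..malloc4 has been rejected with JSON-RPC error -17 ("File exists") because the old superblocks are still present; re-creating the passthru bdevs then lets the examine path find those superblocks, re-claim pt1..pt4 and bring raid_bdev1 back online without another create call. A condensed sketch of that re-assembly check follows; it collapses the per-bdev add/remove steps the test actually performs into one loop, so treat it as illustrative.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # With superblocks still on malloc1..malloc4, a direct re-create must fail
    # (the trace shows code -17, "Failed to create RAID bdev raid_bdev1: File exists").
    if $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected: raid_bdev1 was created over bdevs that carry a superblock" >&2
        exit 1
    fi
    # Re-creating the passthru bdevs lets raid examine re-claim each one from its
    # superblock; once all four are back, the array reports state "online".
    for i in 1 2 3 4; do
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'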
00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.231 14:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.231 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:54.231 "name": "raid_bdev1", 00:28:54.231 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:54.231 "strip_size_kb": 0, 00:28:54.231 "state": "online", 00:28:54.231 "raid_level": "raid1", 00:28:54.231 "superblock": true, 00:28:54.231 "num_base_bdevs": 4, 00:28:54.231 "num_base_bdevs_discovered": 4, 00:28:54.231 "num_base_bdevs_operational": 4, 00:28:54.231 "base_bdevs_list": [ 00:28:54.231 { 00:28:54.231 "name": "pt1", 00:28:54.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:54.231 "is_configured": true, 00:28:54.231 "data_offset": 2048, 00:28:54.231 "data_size": 63488 00:28:54.231 }, 00:28:54.231 { 00:28:54.231 "name": "pt2", 00:28:54.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:54.231 "is_configured": true, 00:28:54.231 "data_offset": 2048, 00:28:54.231 "data_size": 63488 00:28:54.231 }, 00:28:54.231 { 00:28:54.231 "name": "pt3", 00:28:54.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:54.231 "is_configured": true, 00:28:54.231 "data_offset": 2048, 00:28:54.231 "data_size": 63488 00:28:54.231 }, 00:28:54.231 { 00:28:54.231 "name": "pt4", 00:28:54.231 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:54.231 "is_configured": true, 00:28:54.231 "data_offset": 2048, 00:28:54.231 "data_size": 63488 00:28:54.231 } 00:28:54.231 ] 00:28:54.231 }' 00:28:54.231 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:54.231 14:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:54.811 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:28:54.811 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:54.811 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:54.811 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:54.811 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:54.811 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:54.811 14:11:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:54.811 14:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:55.067 [2024-07-25 14:11:44.045651] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:55.067 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:55.067 "name": "raid_bdev1", 00:28:55.068 "aliases": [ 00:28:55.068 "1a6bc7c2-afed-4c1f-bdac-949f5680f42a" 00:28:55.068 ], 00:28:55.068 "product_name": "Raid Volume", 00:28:55.068 "block_size": 512, 00:28:55.068 "num_blocks": 63488, 00:28:55.068 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:55.068 "assigned_rate_limits": { 00:28:55.068 "rw_ios_per_sec": 0, 00:28:55.068 "rw_mbytes_per_sec": 0, 00:28:55.068 "r_mbytes_per_sec": 0, 00:28:55.068 "w_mbytes_per_sec": 0 00:28:55.068 }, 00:28:55.068 "claimed": false, 00:28:55.068 "zoned": false, 00:28:55.068 "supported_io_types": { 00:28:55.068 "read": true, 00:28:55.068 "write": true, 00:28:55.068 "unmap": false, 00:28:55.068 "flush": false, 00:28:55.068 "reset": true, 00:28:55.068 "nvme_admin": false, 00:28:55.068 "nvme_io": false, 00:28:55.068 "nvme_io_md": false, 00:28:55.068 "write_zeroes": true, 00:28:55.068 "zcopy": false, 00:28:55.068 "get_zone_info": false, 00:28:55.068 "zone_management": false, 00:28:55.068 "zone_append": false, 00:28:55.068 "compare": false, 00:28:55.068 "compare_and_write": false, 00:28:55.068 "abort": false, 00:28:55.068 "seek_hole": false, 00:28:55.068 "seek_data": false, 00:28:55.068 "copy": false, 00:28:55.068 "nvme_iov_md": false 00:28:55.068 }, 00:28:55.068 "memory_domains": [ 00:28:55.068 { 00:28:55.068 "dma_device_id": "system", 00:28:55.068 "dma_device_type": 1 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.068 "dma_device_type": 2 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "dma_device_id": "system", 00:28:55.068 "dma_device_type": 1 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.068 "dma_device_type": 2 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "dma_device_id": "system", 00:28:55.068 "dma_device_type": 1 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.068 "dma_device_type": 2 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "dma_device_id": "system", 00:28:55.068 "dma_device_type": 1 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.068 "dma_device_type": 2 00:28:55.068 } 00:28:55.068 ], 00:28:55.068 "driver_specific": { 00:28:55.068 "raid": { 00:28:55.068 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:55.068 "strip_size_kb": 0, 00:28:55.068 "state": "online", 00:28:55.068 "raid_level": "raid1", 00:28:55.068 "superblock": true, 00:28:55.068 "num_base_bdevs": 4, 00:28:55.068 "num_base_bdevs_discovered": 4, 00:28:55.068 "num_base_bdevs_operational": 4, 00:28:55.068 "base_bdevs_list": [ 00:28:55.068 { 00:28:55.068 "name": "pt1", 00:28:55.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:55.068 "is_configured": true, 00:28:55.068 "data_offset": 2048, 00:28:55.068 "data_size": 63488 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "name": "pt2", 00:28:55.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:55.068 "is_configured": true, 00:28:55.068 "data_offset": 2048, 00:28:55.068 "data_size": 63488 00:28:55.068 }, 00:28:55.068 { 
00:28:55.068 "name": "pt3", 00:28:55.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:55.068 "is_configured": true, 00:28:55.068 "data_offset": 2048, 00:28:55.068 "data_size": 63488 00:28:55.068 }, 00:28:55.068 { 00:28:55.068 "name": "pt4", 00:28:55.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:55.068 "is_configured": true, 00:28:55.068 "data_offset": 2048, 00:28:55.068 "data_size": 63488 00:28:55.068 } 00:28:55.068 ] 00:28:55.068 } 00:28:55.068 } 00:28:55.068 }' 00:28:55.068 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:55.068 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:55.068 pt2 00:28:55.068 pt3 00:28:55.068 pt4' 00:28:55.068 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:55.325 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:55.325 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:55.583 "name": "pt1", 00:28:55.583 "aliases": [ 00:28:55.583 "00000000-0000-0000-0000-000000000001" 00:28:55.583 ], 00:28:55.583 "product_name": "passthru", 00:28:55.583 "block_size": 512, 00:28:55.583 "num_blocks": 65536, 00:28:55.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:55.583 "assigned_rate_limits": { 00:28:55.583 "rw_ios_per_sec": 0, 00:28:55.583 "rw_mbytes_per_sec": 0, 00:28:55.583 "r_mbytes_per_sec": 0, 00:28:55.583 "w_mbytes_per_sec": 0 00:28:55.583 }, 00:28:55.583 "claimed": true, 00:28:55.583 "claim_type": "exclusive_write", 00:28:55.583 "zoned": false, 00:28:55.583 "supported_io_types": { 00:28:55.583 "read": true, 00:28:55.583 "write": true, 00:28:55.583 "unmap": true, 00:28:55.583 "flush": true, 00:28:55.583 "reset": true, 00:28:55.583 "nvme_admin": false, 00:28:55.583 "nvme_io": false, 00:28:55.583 "nvme_io_md": false, 00:28:55.583 "write_zeroes": true, 00:28:55.583 "zcopy": true, 00:28:55.583 "get_zone_info": false, 00:28:55.583 "zone_management": false, 00:28:55.583 "zone_append": false, 00:28:55.583 "compare": false, 00:28:55.583 "compare_and_write": false, 00:28:55.583 "abort": true, 00:28:55.583 "seek_hole": false, 00:28:55.583 "seek_data": false, 00:28:55.583 "copy": true, 00:28:55.583 "nvme_iov_md": false 00:28:55.583 }, 00:28:55.583 "memory_domains": [ 00:28:55.583 { 00:28:55.583 "dma_device_id": "system", 00:28:55.583 "dma_device_type": 1 00:28:55.583 }, 00:28:55.583 { 00:28:55.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.583 "dma_device_type": 2 00:28:55.583 } 00:28:55.583 ], 00:28:55.583 "driver_specific": { 00:28:55.583 "passthru": { 00:28:55.583 "name": "pt1", 00:28:55.583 "base_bdev_name": "malloc1" 00:28:55.583 } 00:28:55.583 } 00:28:55.583 }' 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:55.583 14:11:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:55.583 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:55.841 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:55.841 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:55.841 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:55.841 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:55.841 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:55.841 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:55.841 14:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:56.098 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:56.098 "name": "pt2", 00:28:56.098 "aliases": [ 00:28:56.099 "00000000-0000-0000-0000-000000000002" 00:28:56.099 ], 00:28:56.099 "product_name": "passthru", 00:28:56.099 "block_size": 512, 00:28:56.099 "num_blocks": 65536, 00:28:56.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:56.099 "assigned_rate_limits": { 00:28:56.099 "rw_ios_per_sec": 0, 00:28:56.099 "rw_mbytes_per_sec": 0, 00:28:56.099 "r_mbytes_per_sec": 0, 00:28:56.099 "w_mbytes_per_sec": 0 00:28:56.099 }, 00:28:56.099 "claimed": true, 00:28:56.099 "claim_type": "exclusive_write", 00:28:56.099 "zoned": false, 00:28:56.099 "supported_io_types": { 00:28:56.099 "read": true, 00:28:56.099 "write": true, 00:28:56.099 "unmap": true, 00:28:56.099 "flush": true, 00:28:56.099 "reset": true, 00:28:56.099 "nvme_admin": false, 00:28:56.099 "nvme_io": false, 00:28:56.099 "nvme_io_md": false, 00:28:56.099 "write_zeroes": true, 00:28:56.099 "zcopy": true, 00:28:56.099 "get_zone_info": false, 00:28:56.099 "zone_management": false, 00:28:56.099 "zone_append": false, 00:28:56.099 "compare": false, 00:28:56.099 "compare_and_write": false, 00:28:56.099 "abort": true, 00:28:56.099 "seek_hole": false, 00:28:56.099 "seek_data": false, 00:28:56.099 "copy": true, 00:28:56.099 "nvme_iov_md": false 00:28:56.099 }, 00:28:56.099 "memory_domains": [ 00:28:56.099 { 00:28:56.099 "dma_device_id": "system", 00:28:56.099 "dma_device_type": 1 00:28:56.099 }, 00:28:56.099 { 00:28:56.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.099 "dma_device_type": 2 00:28:56.099 } 00:28:56.099 ], 00:28:56.099 "driver_specific": { 00:28:56.099 "passthru": { 00:28:56.099 "name": "pt2", 00:28:56.099 "base_bdev_name": "malloc2" 00:28:56.099 } 00:28:56.099 } 00:28:56.099 }' 00:28:56.099 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:56.099 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:56.099 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:56.099 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:56.357 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:56.614 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:56.614 "name": "pt3", 00:28:56.614 "aliases": [ 00:28:56.614 "00000000-0000-0000-0000-000000000003" 00:28:56.614 ], 00:28:56.614 "product_name": "passthru", 00:28:56.614 "block_size": 512, 00:28:56.614 "num_blocks": 65536, 00:28:56.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:56.614 "assigned_rate_limits": { 00:28:56.614 "rw_ios_per_sec": 0, 00:28:56.614 "rw_mbytes_per_sec": 0, 00:28:56.614 "r_mbytes_per_sec": 0, 00:28:56.614 "w_mbytes_per_sec": 0 00:28:56.614 }, 00:28:56.614 "claimed": true, 00:28:56.614 "claim_type": "exclusive_write", 00:28:56.615 "zoned": false, 00:28:56.615 "supported_io_types": { 00:28:56.615 "read": true, 00:28:56.615 "write": true, 00:28:56.615 "unmap": true, 00:28:56.615 "flush": true, 00:28:56.615 "reset": true, 00:28:56.615 "nvme_admin": false, 00:28:56.615 "nvme_io": false, 00:28:56.615 "nvme_io_md": false, 00:28:56.615 "write_zeroes": true, 00:28:56.615 "zcopy": true, 00:28:56.615 "get_zone_info": false, 00:28:56.615 "zone_management": false, 00:28:56.615 "zone_append": false, 00:28:56.615 "compare": false, 00:28:56.615 "compare_and_write": false, 00:28:56.615 "abort": true, 00:28:56.615 "seek_hole": false, 00:28:56.615 "seek_data": false, 00:28:56.615 "copy": true, 00:28:56.615 "nvme_iov_md": false 00:28:56.615 }, 00:28:56.615 "memory_domains": [ 00:28:56.615 { 00:28:56.615 "dma_device_id": "system", 00:28:56.615 "dma_device_type": 1 00:28:56.615 }, 00:28:56.615 { 00:28:56.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.615 "dma_device_type": 2 00:28:56.615 } 00:28:56.615 ], 00:28:56.615 "driver_specific": { 00:28:56.615 "passthru": { 00:28:56.615 "name": "pt3", 00:28:56.615 "base_bdev_name": "malloc3" 00:28:56.615 } 00:28:56.615 } 00:28:56.615 }' 00:28:56.615 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
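The bdev_raid.sh@203-@208 entries on either side of this point repeat the same check once per base bdev: fetch the passthru bdev with bdev_get_bdevs, then confirm block_size is 512 and that md_size, md_interleave and dif_type are all null. A condensed sketch of that loop, assuming the same rpc.py path, socket and bdev names seen in this run (the script itself iterates over $base_bdev_names rather than hard-coding them):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in pt1 pt2 pt3 pt4; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')        # one-element array -> object
        [[ $(jq .block_size    <<<"$info") == 512  ]]            # bdev_raid.sh@205
        [[ $(jq .md_size       <<<"$info") == null ]]            # bdev_raid.sh@206
        [[ $(jq .md_interleave <<<"$info") == null ]]            # bdev_raid.sh@207
        [[ $(jq .dif_type      <<<"$info") == null ]]            # bdev_raid.sh@208
    done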
00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:56.873 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.131 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.131 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:57.131 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:57.131 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:28:57.131 14:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:57.389 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:57.389 "name": "pt4", 00:28:57.389 "aliases": [ 00:28:57.389 "00000000-0000-0000-0000-000000000004" 00:28:57.389 ], 00:28:57.389 "product_name": "passthru", 00:28:57.389 "block_size": 512, 00:28:57.389 "num_blocks": 65536, 00:28:57.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:57.389 "assigned_rate_limits": { 00:28:57.389 "rw_ios_per_sec": 0, 00:28:57.389 "rw_mbytes_per_sec": 0, 00:28:57.389 "r_mbytes_per_sec": 0, 00:28:57.389 "w_mbytes_per_sec": 0 00:28:57.389 }, 00:28:57.389 "claimed": true, 00:28:57.389 "claim_type": "exclusive_write", 00:28:57.389 "zoned": false, 00:28:57.389 "supported_io_types": { 00:28:57.389 "read": true, 00:28:57.389 "write": true, 00:28:57.389 "unmap": true, 00:28:57.389 "flush": true, 00:28:57.389 "reset": true, 00:28:57.389 "nvme_admin": false, 00:28:57.389 "nvme_io": false, 00:28:57.389 "nvme_io_md": false, 00:28:57.389 "write_zeroes": true, 00:28:57.389 "zcopy": true, 00:28:57.389 "get_zone_info": false, 00:28:57.389 "zone_management": false, 00:28:57.389 "zone_append": false, 00:28:57.389 "compare": false, 00:28:57.389 "compare_and_write": false, 00:28:57.389 "abort": true, 00:28:57.389 "seek_hole": false, 00:28:57.389 "seek_data": false, 00:28:57.389 "copy": true, 00:28:57.389 "nvme_iov_md": false 00:28:57.389 }, 00:28:57.389 "memory_domains": [ 00:28:57.389 { 00:28:57.389 "dma_device_id": "system", 00:28:57.389 "dma_device_type": 1 00:28:57.389 }, 00:28:57.389 { 00:28:57.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:57.389 "dma_device_type": 2 00:28:57.389 } 00:28:57.389 ], 00:28:57.389 "driver_specific": { 00:28:57.389 "passthru": { 00:28:57.389 "name": "pt4", 00:28:57.389 "base_bdev_name": "malloc4" 00:28:57.389 } 00:28:57.389 } 00:28:57.389 }' 00:28:57.389 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:57.389 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:57.389 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:57.389 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:28:57.647 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:57.905 [2024-07-25 14:11:46.906266] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:57.905 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 1a6bc7c2-afed-4c1f-bdac-949f5680f42a '!=' 1a6bc7c2-afed-4c1f-bdac-949f5680f42a ']' 00:28:57.905 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:28:57.905 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:57.905 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:28:57.905 14:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:58.472 [2024-07-25 14:11:47.230096] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.472 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.760 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.760 "name": "raid_bdev1", 00:28:58.761 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:28:58.761 "strip_size_kb": 0, 00:28:58.761 "state": "online", 00:28:58.761 "raid_level": "raid1", 00:28:58.761 "superblock": true, 00:28:58.761 "num_base_bdevs": 4, 00:28:58.761 "num_base_bdevs_discovered": 3, 00:28:58.761 "num_base_bdevs_operational": 3, 00:28:58.761 "base_bdevs_list": [ 00:28:58.761 { 00:28:58.761 "name": null, 00:28:58.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.761 "is_configured": false, 00:28:58.761 "data_offset": 2048, 00:28:58.761 "data_size": 63488 00:28:58.761 }, 00:28:58.761 { 00:28:58.761 "name": "pt2", 
00:28:58.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:58.761 "is_configured": true, 00:28:58.761 "data_offset": 2048, 00:28:58.761 "data_size": 63488 00:28:58.761 }, 00:28:58.761 { 00:28:58.761 "name": "pt3", 00:28:58.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:58.761 "is_configured": true, 00:28:58.761 "data_offset": 2048, 00:28:58.761 "data_size": 63488 00:28:58.761 }, 00:28:58.761 { 00:28:58.761 "name": "pt4", 00:28:58.761 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:58.761 "is_configured": true, 00:28:58.761 "data_offset": 2048, 00:28:58.761 "data_size": 63488 00:28:58.761 } 00:28:58.761 ] 00:28:58.761 }' 00:28:58.761 14:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.761 14:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.337 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:59.596 [2024-07-25 14:11:48.390277] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:59.596 [2024-07-25 14:11:48.390459] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:59.596 [2024-07-25 14:11:48.390660] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:59.596 [2024-07-25 14:11:48.390867] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:59.596 [2024-07-25 14:11:48.390993] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:28:59.596 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.596 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:59.853 14:11:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:00.111 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:00.111 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:00.111 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:00.368 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:00.368 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:00.368 14:11:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:29:00.368 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:00.368 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:00.626 [2024-07-25 14:11:49.582503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:00.626 [2024-07-25 14:11:49.582768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:00.626 [2024-07-25 14:11:49.582921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:00.626 [2024-07-25 14:11:49.583071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:00.626 [2024-07-25 14:11:49.585715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:00.626 [2024-07-25 14:11:49.585915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:00.626 [2024-07-25 14:11:49.586159] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:00.626 [2024-07-25 14:11:49.586341] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:00.626 pt2 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.626 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.883 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.883 "name": "raid_bdev1", 00:29:00.883 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:29:00.883 "strip_size_kb": 0, 00:29:00.883 "state": "configuring", 00:29:00.883 "raid_level": "raid1", 00:29:00.883 "superblock": true, 00:29:00.883 "num_base_bdevs": 4, 00:29:00.883 "num_base_bdevs_discovered": 1, 00:29:00.883 "num_base_bdevs_operational": 3, 00:29:00.883 "base_bdevs_list": [ 00:29:00.883 { 00:29:00.883 "name": null, 00:29:00.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.883 "is_configured": false, 00:29:00.883 "data_offset": 2048, 00:29:00.883 "data_size": 63488 00:29:00.883 }, 00:29:00.883 { 
00:29:00.883 "name": "pt2", 00:29:00.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:00.883 "is_configured": true, 00:29:00.883 "data_offset": 2048, 00:29:00.883 "data_size": 63488 00:29:00.883 }, 00:29:00.883 { 00:29:00.883 "name": null, 00:29:00.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:00.883 "is_configured": false, 00:29:00.883 "data_offset": 2048, 00:29:00.883 "data_size": 63488 00:29:00.883 }, 00:29:00.883 { 00:29:00.883 "name": null, 00:29:00.883 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:00.883 "is_configured": false, 00:29:00.883 "data_offset": 2048, 00:29:00.883 "data_size": 63488 00:29:00.883 } 00:29:00.883 ] 00:29:00.883 }' 00:29:00.883 14:11:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.883 14:11:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:01.855 [2024-07-25 14:11:50.806970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:01.855 [2024-07-25 14:11:50.807231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.855 [2024-07-25 14:11:50.807413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:01.855 [2024-07-25 14:11:50.807571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.855 [2024-07-25 14:11:50.808253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.855 [2024-07-25 14:11:50.808415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:01.855 [2024-07-25 14:11:50.808654] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:01.855 [2024-07-25 14:11:50.808787] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:01.855 pt3 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.855 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.855 
14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.112 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.112 "name": "raid_bdev1", 00:29:02.112 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:29:02.112 "strip_size_kb": 0, 00:29:02.112 "state": "configuring", 00:29:02.112 "raid_level": "raid1", 00:29:02.112 "superblock": true, 00:29:02.112 "num_base_bdevs": 4, 00:29:02.112 "num_base_bdevs_discovered": 2, 00:29:02.112 "num_base_bdevs_operational": 3, 00:29:02.112 "base_bdevs_list": [ 00:29:02.112 { 00:29:02.112 "name": null, 00:29:02.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.112 "is_configured": false, 00:29:02.112 "data_offset": 2048, 00:29:02.112 "data_size": 63488 00:29:02.112 }, 00:29:02.112 { 00:29:02.112 "name": "pt2", 00:29:02.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:02.112 "is_configured": true, 00:29:02.112 "data_offset": 2048, 00:29:02.112 "data_size": 63488 00:29:02.112 }, 00:29:02.112 { 00:29:02.112 "name": "pt3", 00:29:02.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:02.112 "is_configured": true, 00:29:02.112 "data_offset": 2048, 00:29:02.112 "data_size": 63488 00:29:02.112 }, 00:29:02.112 { 00:29:02.112 "name": null, 00:29:02.112 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:02.112 "is_configured": false, 00:29:02.112 "data_offset": 2048, 00:29:02.112 "data_size": 63488 00:29:02.112 } 00:29:02.112 ] 00:29:02.112 }' 00:29:02.112 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.112 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.678 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:29:02.678 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:02.678 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:29:02.678 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:02.934 [2024-07-25 14:11:51.919218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:02.934 [2024-07-25 14:11:51.919518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.934 [2024-07-25 14:11:51.919614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:02.934 [2024-07-25 14:11:51.919871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.934 [2024-07-25 14:11:51.920461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.934 [2024-07-25 14:11:51.920642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:02.934 [2024-07-25 14:11:51.920881] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:02.934 [2024-07-25 14:11:51.921039] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:02.934 [2024-07-25 14:11:51.921341] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:29:02.934 [2024-07-25 14:11:51.921471] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:02.934 [2024-07-25 
14:11:51.921623] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:29:02.934 [2024-07-25 14:11:51.922127] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:29:02.934 [2024-07-25 14:11:51.922260] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:29:02.934 [2024-07-25 14:11:51.922520] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:02.934 pt4 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.934 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.497 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.497 "name": "raid_bdev1", 00:29:03.497 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:29:03.497 "strip_size_kb": 0, 00:29:03.497 "state": "online", 00:29:03.497 "raid_level": "raid1", 00:29:03.497 "superblock": true, 00:29:03.497 "num_base_bdevs": 4, 00:29:03.497 "num_base_bdevs_discovered": 3, 00:29:03.497 "num_base_bdevs_operational": 3, 00:29:03.497 "base_bdevs_list": [ 00:29:03.497 { 00:29:03.497 "name": null, 00:29:03.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.497 "is_configured": false, 00:29:03.497 "data_offset": 2048, 00:29:03.497 "data_size": 63488 00:29:03.497 }, 00:29:03.497 { 00:29:03.497 "name": "pt2", 00:29:03.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:03.497 "is_configured": true, 00:29:03.497 "data_offset": 2048, 00:29:03.497 "data_size": 63488 00:29:03.497 }, 00:29:03.497 { 00:29:03.497 "name": "pt3", 00:29:03.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:03.497 "is_configured": true, 00:29:03.497 "data_offset": 2048, 00:29:03.497 "data_size": 63488 00:29:03.497 }, 00:29:03.497 { 00:29:03.497 "name": "pt4", 00:29:03.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:03.497 "is_configured": true, 00:29:03.497 "data_offset": 2048, 00:29:03.497 "data_size": 63488 00:29:03.497 } 00:29:03.497 ] 00:29:03.497 }' 00:29:03.497 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.497 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.062 14:11:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:04.320 [2024-07-25 14:11:53.182312] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:04.320 [2024-07-25 14:11:53.182563] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:04.320 [2024-07-25 14:11:53.182762] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:04.320 [2024-07-25 14:11:53.182957] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:04.320 [2024-07-25 14:11:53.183086] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:29:04.320 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.320 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:29:04.576 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:29:04.576 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:29:04.576 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:29:04.576 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:29:04.576 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:04.833 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:05.091 [2024-07-25 14:11:54.066464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:05.091 [2024-07-25 14:11:54.066768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.091 [2024-07-25 14:11:54.066951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:29:05.091 [2024-07-25 14:11:54.067120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.091 [2024-07-25 14:11:54.069874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.091 [2024-07-25 14:11:54.070070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:05.091 [2024-07-25 14:11:54.070314] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:05.091 [2024-07-25 14:11:54.070486] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:05.091 [2024-07-25 14:11:54.070792] bdev_raid.c:3743:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:05.091 [2024-07-25 14:11:54.070924] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:05.091 [2024-07-25 14:11:54.071050] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:29:05.091 [2024-07-25 14:11:54.071228] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:05.091 [2024-07-25 14:11:54.071523] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 
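Each verify_raid_bdev_state call in this test (the bdev_raid.sh@116-@126 locals that recur throughout the log) amounts to re-reading the RAID bdev after an add or remove step and asserting on its state and base-bdev counts. A minimal sketch of that query, assuming the socket and bdev name used throughout this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    jq -r .state                      <<<"$info"   # e.g. online or configuring
    jq -r .num_base_bdevs_discovered  <<<"$info"   # e.g. 3 after pt1 was removed
    jq -r .num_base_bdevs_operational <<<"$info"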
00:29:05.091 pt1 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.091 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.348 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.348 "name": "raid_bdev1", 00:29:05.348 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:29:05.348 "strip_size_kb": 0, 00:29:05.348 "state": "configuring", 00:29:05.348 "raid_level": "raid1", 00:29:05.348 "superblock": true, 00:29:05.348 "num_base_bdevs": 4, 00:29:05.348 "num_base_bdevs_discovered": 2, 00:29:05.348 "num_base_bdevs_operational": 3, 00:29:05.348 "base_bdevs_list": [ 00:29:05.348 { 00:29:05.348 "name": null, 00:29:05.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.348 "is_configured": false, 00:29:05.348 "data_offset": 2048, 00:29:05.348 "data_size": 63488 00:29:05.348 }, 00:29:05.348 { 00:29:05.348 "name": "pt2", 00:29:05.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:05.348 "is_configured": true, 00:29:05.348 "data_offset": 2048, 00:29:05.348 "data_size": 63488 00:29:05.348 }, 00:29:05.348 { 00:29:05.348 "name": "pt3", 00:29:05.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:05.348 "is_configured": true, 00:29:05.348 "data_offset": 2048, 00:29:05.348 "data_size": 63488 00:29:05.348 }, 00:29:05.348 { 00:29:05.348 "name": null, 00:29:05.348 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:05.348 "is_configured": false, 00:29:05.348 "data_offset": 2048, 00:29:05.348 "data_size": 63488 00:29:05.348 } 00:29:05.348 ] 00:29:05.348 }' 00:29:05.348 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.348 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:06.278 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:29:06.278 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:06.278 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == 
\f\a\l\s\e ]] 00:29:06.278 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:06.536 [2024-07-25 14:11:55.535247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:06.536 [2024-07-25 14:11:55.535499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:06.536 [2024-07-25 14:11:55.535677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:29:06.536 [2024-07-25 14:11:55.535849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:06.536 [2024-07-25 14:11:55.536544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:06.536 [2024-07-25 14:11:55.536726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:06.536 [2024-07-25 14:11:55.536947] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:06.536 [2024-07-25 14:11:55.537087] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:06.536 [2024-07-25 14:11:55.537348] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:29:06.536 [2024-07-25 14:11:55.537475] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:06.536 [2024-07-25 14:11:55.537627] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:29:06.536 [2024-07-25 14:11:55.538133] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:29:06.536 [2024-07-25 14:11:55.538264] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:29:06.536 [2024-07-25 14:11:55.538542] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.536 pt4 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.536 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.794 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.794 "name": "raid_bdev1", 
00:29:06.794 "uuid": "1a6bc7c2-afed-4c1f-bdac-949f5680f42a", 00:29:06.794 "strip_size_kb": 0, 00:29:06.794 "state": "online", 00:29:06.794 "raid_level": "raid1", 00:29:06.794 "superblock": true, 00:29:06.794 "num_base_bdevs": 4, 00:29:06.794 "num_base_bdevs_discovered": 3, 00:29:06.794 "num_base_bdevs_operational": 3, 00:29:06.794 "base_bdevs_list": [ 00:29:06.794 { 00:29:06.794 "name": null, 00:29:06.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.794 "is_configured": false, 00:29:06.794 "data_offset": 2048, 00:29:06.794 "data_size": 63488 00:29:06.794 }, 00:29:06.794 { 00:29:06.794 "name": "pt2", 00:29:06.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:06.794 "is_configured": true, 00:29:06.794 "data_offset": 2048, 00:29:06.794 "data_size": 63488 00:29:06.794 }, 00:29:06.794 { 00:29:06.794 "name": "pt3", 00:29:06.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:06.794 "is_configured": true, 00:29:06.794 "data_offset": 2048, 00:29:06.794 "data_size": 63488 00:29:06.794 }, 00:29:06.794 { 00:29:06.794 "name": "pt4", 00:29:06.794 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:06.794 "is_configured": true, 00:29:06.794 "data_offset": 2048, 00:29:06.794 "data_size": 63488 00:29:06.794 } 00:29:06.794 ] 00:29:06.794 }' 00:29:06.794 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.794 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.726 14:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:07.726 14:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:07.726 14:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:29:07.726 14:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:29:07.726 14:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:07.984 [2024-07-25 14:11:56.895831] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 1a6bc7c2-afed-4c1f-bdac-949f5680f42a '!=' 1a6bc7c2-afed-4c1f-bdac-949f5680f42a ']' 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 142993 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 142993 ']' 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 142993 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 142993 00:29:07.984 killing process with pid 142993 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 142993' 00:29:07.984 14:11:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 142993 00:29:07.984 14:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 142993 00:29:07.984 [2024-07-25 14:11:56.934365] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:07.984 [2024-07-25 14:11:56.934450] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:07.984 [2024-07-25 14:11:56.934533] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:07.984 [2024-07-25 14:11:56.934545] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:29:08.243 [2024-07-25 14:11:57.259091] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:09.614 ************************************ 00:29:09.614 END TEST raid_superblock_test 00:29:09.614 ************************************ 00:29:09.614 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:29:09.614 00:29:09.614 real 0m28.574s 00:29:09.614 user 0m53.047s 00:29:09.614 sys 0m3.278s 00:29:09.614 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:09.614 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.614 14:11:58 bdev_raid -- bdev/bdev_raid.sh@1024 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:29:09.614 14:11:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:09.614 14:11:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:09.614 14:11:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:09.614 ************************************ 00:29:09.614 START TEST raid_read_error_test 00:29:09.614 ************************************ 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid1 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=4 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=read 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev4 00:29:09.614 14:11:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid1 '!=' raid1 ']' 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@892 -- # strip_size=0 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.pMMMS3wN99 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=143862 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 143862 /var/tmp/spdk-raid.sock 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 143862 ']' 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:09.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:09.614 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.614 [2024-07-25 14:11:58.509412] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
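The raid_read_error_test entries below build each BaseBdevN for the RAID1 volume as a three-layer stack: a malloc bdev, wrapped by an error bdev (EE_BaseBdevN_malloc), wrapped by a passthru bdev, so that read failures can later be injected on the error layer with bdev_error_inject_error. A sketch of one such stack, using the same arguments that appear in the log that follows:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc            # backing malloc bdev (args as logged: 32, 512)
    $rpc bdev_error_create BaseBdev1_malloc                       # error-injection wrapper, named EE_BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 # passthru bdev the raid uses as its base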
00:29:09.614 [2024-07-25 14:11:58.510407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143862 ] 00:29:09.872 [2024-07-25 14:11:58.679179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.130 [2024-07-25 14:11:58.925697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.130 [2024-07-25 14:11:59.123041] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:10.694 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:10.694 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:29:10.694 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:10.694 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:10.953 BaseBdev1_malloc 00:29:10.953 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:29:11.211 true 00:29:11.211 14:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:11.777 [2024-07-25 14:12:00.545434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:11.777 [2024-07-25 14:12:00.545774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.777 [2024-07-25 14:12:00.546008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:11.777 [2024-07-25 14:12:00.546203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.777 [2024-07-25 14:12:00.549102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.777 [2024-07-25 14:12:00.549288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:11.777 BaseBdev1 00:29:11.777 14:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:11.777 14:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:12.035 BaseBdev2_malloc 00:29:12.035 14:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:29:12.292 true 00:29:12.292 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:12.550 [2024-07-25 14:12:01.416564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:12.550 [2024-07-25 14:12:01.416882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.550 [2024-07-25 14:12:01.417119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:12.550 [2024-07-25 14:12:01.417315] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.550 [2024-07-25 14:12:01.420122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.550 [2024-07-25 14:12:01.420304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:12.550 BaseBdev2 00:29:12.550 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:12.550 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:12.808 BaseBdev3_malloc 00:29:12.808 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:29:13.065 true 00:29:13.066 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:13.324 [2024-07-25 14:12:02.223366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:13.324 [2024-07-25 14:12:02.223696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.324 [2024-07-25 14:12:02.223940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:13.324 [2024-07-25 14:12:02.224143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.324 [2024-07-25 14:12:02.226979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.324 [2024-07-25 14:12:02.227163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:13.324 BaseBdev3 00:29:13.324 14:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:13.324 14:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:13.582 BaseBdev4_malloc 00:29:13.582 14:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:29:13.840 true 00:29:13.840 14:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:29:14.098 [2024-07-25 14:12:03.030470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:29:14.098 [2024-07-25 14:12:03.030769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.098 [2024-07-25 14:12:03.031036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:14.098 [2024-07-25 14:12:03.031237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.098 [2024-07-25 14:12:03.034167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.098 [2024-07-25 14:12:03.034354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:14.098 BaseBdev4 00:29:14.098 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:29:14.355 [2024-07-25 14:12:03.298848] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:14.356 [2024-07-25 14:12:03.301244] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:14.356 [2024-07-25 14:12:03.301485] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:14.356 [2024-07-25 14:12:03.301692] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:14.356 [2024-07-25 14:12:03.302146] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:29:14.356 [2024-07-25 14:12:03.302276] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:14.356 [2024-07-25 14:12:03.302449] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:14.356 [2024-07-25 14:12:03.303005] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:29:14.356 [2024-07-25 14:12:03.303134] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:29:14.356 [2024-07-25 14:12:03.303482] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.356 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.664 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:14.664 "name": "raid_bdev1", 00:29:14.664 "uuid": "af5b6107-f3e7-4ac1-a33b-afec2b36ff2e", 00:29:14.664 "strip_size_kb": 0, 00:29:14.664 "state": "online", 00:29:14.664 "raid_level": "raid1", 00:29:14.664 "superblock": true, 00:29:14.664 "num_base_bdevs": 4, 00:29:14.664 "num_base_bdevs_discovered": 4, 00:29:14.664 "num_base_bdevs_operational": 4, 00:29:14.664 "base_bdevs_list": [ 00:29:14.664 { 00:29:14.664 "name": "BaseBdev1", 00:29:14.664 "uuid": "e7a81027-3ebc-599d-9efa-f71561f7538f", 00:29:14.664 "is_configured": true, 00:29:14.664 "data_offset": 2048, 00:29:14.664 "data_size": 63488 00:29:14.664 }, 00:29:14.664 { 00:29:14.664 "name": "BaseBdev2", 00:29:14.664 
"uuid": "bb6d7c39-0f30-5050-ad40-15d439470df5", 00:29:14.664 "is_configured": true, 00:29:14.664 "data_offset": 2048, 00:29:14.664 "data_size": 63488 00:29:14.664 }, 00:29:14.664 { 00:29:14.664 "name": "BaseBdev3", 00:29:14.664 "uuid": "d4c0b9fe-e176-5acf-8c2e-7871ee180255", 00:29:14.664 "is_configured": true, 00:29:14.664 "data_offset": 2048, 00:29:14.664 "data_size": 63488 00:29:14.664 }, 00:29:14.664 { 00:29:14.664 "name": "BaseBdev4", 00:29:14.664 "uuid": "98ced330-138a-5933-8574-c3b746659378", 00:29:14.664 "is_configured": true, 00:29:14.664 "data_offset": 2048, 00:29:14.664 "data_size": 63488 00:29:14.664 } 00:29:14.664 ] 00:29:14.664 }' 00:29:14.664 14:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:14.664 14:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.596 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:29:15.596 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:15.596 [2024-07-25 14:12:04.417048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:16.533 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@920 -- # [[ read = \w\r\i\t\e ]] 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@923 -- # expected_num_base_bdevs=4 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.801 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.801 "name": "raid_bdev1", 00:29:16.801 "uuid": "af5b6107-f3e7-4ac1-a33b-afec2b36ff2e", 00:29:16.801 "strip_size_kb": 0, 00:29:16.801 
"state": "online", 00:29:16.801 "raid_level": "raid1", 00:29:16.801 "superblock": true, 00:29:16.801 "num_base_bdevs": 4, 00:29:16.801 "num_base_bdevs_discovered": 4, 00:29:16.801 "num_base_bdevs_operational": 4, 00:29:16.801 "base_bdevs_list": [ 00:29:16.801 { 00:29:16.801 "name": "BaseBdev1", 00:29:16.801 "uuid": "e7a81027-3ebc-599d-9efa-f71561f7538f", 00:29:16.801 "is_configured": true, 00:29:16.802 "data_offset": 2048, 00:29:16.802 "data_size": 63488 00:29:16.802 }, 00:29:16.802 { 00:29:16.802 "name": "BaseBdev2", 00:29:16.802 "uuid": "bb6d7c39-0f30-5050-ad40-15d439470df5", 00:29:16.802 "is_configured": true, 00:29:16.802 "data_offset": 2048, 00:29:16.802 "data_size": 63488 00:29:16.802 }, 00:29:16.802 { 00:29:16.802 "name": "BaseBdev3", 00:29:16.802 "uuid": "d4c0b9fe-e176-5acf-8c2e-7871ee180255", 00:29:16.802 "is_configured": true, 00:29:16.802 "data_offset": 2048, 00:29:16.802 "data_size": 63488 00:29:16.802 }, 00:29:16.802 { 00:29:16.802 "name": "BaseBdev4", 00:29:16.802 "uuid": "98ced330-138a-5933-8574-c3b746659378", 00:29:16.802 "is_configured": true, 00:29:16.802 "data_offset": 2048, 00:29:16.802 "data_size": 63488 00:29:16.802 } 00:29:16.802 ] 00:29:16.802 }' 00:29:16.802 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.802 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.734 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:17.992 [2024-07-25 14:12:06.787293] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:17.992 [2024-07-25 14:12:06.787339] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:17.992 [2024-07-25 14:12:06.790423] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:17.992 [2024-07-25 14:12:06.790504] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.992 [2024-07-25 14:12:06.790658] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:17.992 [2024-07-25 14:12:06.790669] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:29:17.992 0 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 143862 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 143862 ']' 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 143862 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143862 00:29:17.992 killing process with pid 143862 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 143862' 00:29:17.992 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 143862 00:29:17.992 14:12:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 143862 00:29:17.992 [2024-07-25 14:12:06.830044] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:18.249 [2024-07-25 14:12:07.088273] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.pMMMS3wN99 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:29:19.183 ************************************ 00:29:19.183 END TEST raid_read_error_test 00:29:19.183 ************************************ 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.00 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid1 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@935 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:19.183 00:29:19.183 real 0m9.795s 00:29:19.183 user 0m15.627s 00:29:19.183 sys 0m0.978s 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.183 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.441 14:12:08 bdev_raid -- bdev/bdev_raid.sh@1025 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:29:19.442 14:12:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:19.442 14:12:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:19.442 14:12:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:19.442 ************************************ 00:29:19.442 START TEST raid_write_error_test 00:29:19.442 ************************************ 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@878 -- # local raid_level=raid1 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@879 -- # local num_base_bdevs=4 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@880 -- # local error_io_type=write 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i = 1 )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev1 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev2 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev3 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:19.442 14:12:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # echo BaseBdev4 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i++ )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # (( i <= num_base_bdevs )) 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@881 -- # local base_bdevs 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@882 -- # local raid_bdev_name=raid_bdev1 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@883 -- # local strip_size 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@884 -- # local create_arg 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@885 -- # local bdevperf_log 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@886 -- # local fail_per_s 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@888 -- # '[' raid1 '!=' raid1 ']' 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@892 -- # strip_size=0 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # mktemp -p /raidtest 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@895 -- # bdevperf_log=/raidtest/tmp.wnXNf3BQT4 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@898 -- # raid_pid=144089 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@899 -- # waitforlisten 144089 /var/tmp/spdk-raid.sock 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 144089 ']' 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@897 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:19.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:19.442 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.442 [2024-07-25 14:12:08.353769] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
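For reference, the per-base-bdev stack that raid_write_error_test builds next (and that raid_read_error_test built above) layers an error-injection bdev and a passthru bdev on top of each malloc bdev: the passthru name (BaseBdev1..BaseBdev4) is what the raid consumes, while the EE_*_malloc error bdev underneath is where failures get injected. A minimal sketch of that RPC sequence, assuming bdevperf is already listening on the socket shown in the trace (the loop and variable names are shorthand, not the test script's literal code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        # malloc backing bdev, sized and formatted as in the trace (32, 512)
        $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        # wrap it in an error bdev; the resulting bdev is EE_BaseBdev${i}_malloc
        $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
        # passthru on top gives the raid a stable name, BaseBdev${i}
        $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done

As the logs above show, a read failure injected into EE_BaseBdev1_malloc leaves all four base bdevs operational (raid1 can serve the read from a mirror), while the write test below expects the failing base bdev to be dropped from the array.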
00:29:19.442 [2024-07-25 14:12:08.354026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144089 ] 00:29:19.700 [2024-07-25 14:12:08.524827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.700 [2024-07-25 14:12:08.732244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.959 [2024-07-25 14:12:08.915709] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:20.562 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.562 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:29:20.562 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:20.562 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:20.836 BaseBdev1_malloc 00:29:20.836 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:29:21.094 true 00:29:21.094 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:21.094 [2024-07-25 14:12:10.121188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:21.094 [2024-07-25 14:12:10.121355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:21.094 [2024-07-25 14:12:10.121402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:21.094 [2024-07-25 14:12:10.121425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:21.094 [2024-07-25 14:12:10.123997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:21.094 [2024-07-25 14:12:10.124068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:21.094 BaseBdev1 00:29:21.352 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:21.352 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:21.610 BaseBdev2_malloc 00:29:21.610 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:29:21.868 true 00:29:21.868 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:22.125 [2024-07-25 14:12:10.938513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:22.125 [2024-07-25 14:12:10.938661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.125 [2024-07-25 14:12:10.938707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:22.125 [2024-07-25 
14:12:10.938728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.125 [2024-07-25 14:12:10.941578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.125 [2024-07-25 14:12:10.941651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:22.125 BaseBdev2 00:29:22.125 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:22.125 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:22.382 BaseBdev3_malloc 00:29:22.382 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:29:22.639 true 00:29:22.639 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:22.896 [2024-07-25 14:12:11.683406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:22.896 [2024-07-25 14:12:11.683556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.896 [2024-07-25 14:12:11.683599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:22.896 [2024-07-25 14:12:11.683627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.896 [2024-07-25 14:12:11.686380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.896 [2024-07-25 14:12:11.686442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:22.896 BaseBdev3 00:29:22.896 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@902 -- # for bdev in "${base_bdevs[@]}" 00:29:22.896 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:23.154 BaseBdev4_malloc 00:29:23.154 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:29:23.412 true 00:29:23.412 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@905 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:29:23.670 [2024-07-25 14:12:12.466908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:29:23.670 [2024-07-25 14:12:12.467056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.670 [2024-07-25 14:12:12.467119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:23.670 [2024-07-25 14:12:12.467150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.670 [2024-07-25 14:12:12.469737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.670 [2024-07-25 14:12:12.469851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:23.670 BaseBdev4 00:29:23.670 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:29:23.670 [2024-07-25 14:12:12.707012] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:23.670 [2024-07-25 14:12:12.709224] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:23.670 [2024-07-25 14:12:12.709351] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:23.670 [2024-07-25 14:12:12.709429] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:23.670 [2024-07-25 14:12:12.709699] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:29:23.670 [2024-07-25 14:12:12.709722] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:23.670 [2024-07-25 14:12:12.709898] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:23.670 [2024-07-25 14:12:12.710344] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:29:23.670 [2024-07-25 14:12:12.710370] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:29:23.670 [2024-07-25 14:12:12.710558] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@910 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.928 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.186 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:24.186 "name": "raid_bdev1", 00:29:24.186 "uuid": "3c2b0021-085c-4c36-b2eb-9ee83a53d33d", 00:29:24.186 "strip_size_kb": 0, 00:29:24.186 "state": "online", 00:29:24.186 "raid_level": "raid1", 00:29:24.186 "superblock": true, 00:29:24.186 "num_base_bdevs": 4, 00:29:24.186 "num_base_bdevs_discovered": 4, 00:29:24.186 "num_base_bdevs_operational": 4, 00:29:24.186 "base_bdevs_list": [ 00:29:24.186 { 00:29:24.186 "name": "BaseBdev1", 00:29:24.186 "uuid": "e4df57aa-8839-56ab-97d5-d03969ce757b", 00:29:24.186 "is_configured": true, 00:29:24.186 "data_offset": 2048, 00:29:24.186 "data_size": 63488 00:29:24.186 }, 00:29:24.186 { 00:29:24.186 
"name": "BaseBdev2", 00:29:24.186 "uuid": "ebc07f7e-75b6-5f74-b95e-44f94c3694f9", 00:29:24.186 "is_configured": true, 00:29:24.186 "data_offset": 2048, 00:29:24.186 "data_size": 63488 00:29:24.186 }, 00:29:24.186 { 00:29:24.186 "name": "BaseBdev3", 00:29:24.186 "uuid": "d5d283f9-6603-59b6-b50d-a20fc69561bd", 00:29:24.186 "is_configured": true, 00:29:24.186 "data_offset": 2048, 00:29:24.186 "data_size": 63488 00:29:24.186 }, 00:29:24.186 { 00:29:24.186 "name": "BaseBdev4", 00:29:24.186 "uuid": "5737653c-45c4-572d-a858-e34bb3d2c87d", 00:29:24.186 "is_configured": true, 00:29:24.186 "data_offset": 2048, 00:29:24.186 "data_size": 63488 00:29:24.186 } 00:29:24.186 ] 00:29:24.186 }' 00:29:24.186 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:24.186 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.757 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@914 -- # sleep 1 00:29:24.757 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:24.757 [2024-07-25 14:12:13.676831] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:25.687 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@917 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:29:25.945 [2024-07-25 14:12:14.853093] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:29:25.945 [2024-07-25 14:12:14.853238] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:25.945 [2024-07-25 14:12:14.853524] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@919 -- # local expected_num_base_bdevs 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@920 -- # [[ write = \w\r\i\t\e ]] 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@921 -- # expected_num_base_bdevs=3 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@925 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:29:25.945 14:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.202 14:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:26.202 "name": "raid_bdev1", 00:29:26.202 "uuid": "3c2b0021-085c-4c36-b2eb-9ee83a53d33d", 00:29:26.202 "strip_size_kb": 0, 00:29:26.202 "state": "online", 00:29:26.202 "raid_level": "raid1", 00:29:26.202 "superblock": true, 00:29:26.202 "num_base_bdevs": 4, 00:29:26.202 "num_base_bdevs_discovered": 3, 00:29:26.202 "num_base_bdevs_operational": 3, 00:29:26.203 "base_bdevs_list": [ 00:29:26.203 { 00:29:26.203 "name": null, 00:29:26.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.203 "is_configured": false, 00:29:26.203 "data_offset": 2048, 00:29:26.203 "data_size": 63488 00:29:26.203 }, 00:29:26.203 { 00:29:26.203 "name": "BaseBdev2", 00:29:26.203 "uuid": "ebc07f7e-75b6-5f74-b95e-44f94c3694f9", 00:29:26.203 "is_configured": true, 00:29:26.203 "data_offset": 2048, 00:29:26.203 "data_size": 63488 00:29:26.203 }, 00:29:26.203 { 00:29:26.203 "name": "BaseBdev3", 00:29:26.203 "uuid": "d5d283f9-6603-59b6-b50d-a20fc69561bd", 00:29:26.203 "is_configured": true, 00:29:26.203 "data_offset": 2048, 00:29:26.203 "data_size": 63488 00:29:26.203 }, 00:29:26.203 { 00:29:26.203 "name": "BaseBdev4", 00:29:26.203 "uuid": "5737653c-45c4-572d-a858-e34bb3d2c87d", 00:29:26.203 "is_configured": true, 00:29:26.203 "data_offset": 2048, 00:29:26.203 "data_size": 63488 00:29:26.203 } 00:29:26.203 ] 00:29:26.203 }' 00:29:26.203 14:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:26.203 14:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.135 14:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@927 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:27.135 [2024-07-25 14:12:16.077793] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:27.135 [2024-07-25 14:12:16.077865] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:27.135 [2024-07-25 14:12:16.080979] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:27.135 [2024-07-25 14:12:16.081037] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.135 [2024-07-25 14:12:16.081150] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:27.135 [2024-07-25 14:12:16.081163] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:29:27.135 0 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@929 -- # killprocess 144089 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 144089 ']' 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 144089 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 144089 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 144089' 00:29:27.135 killing process with pid 144089 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 144089 00:29:27.135 [2024-07-25 14:12:16.116789] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:27.135 14:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 144089 00:29:27.393 [2024-07-25 14:12:16.384902] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep -v Job /raidtest/tmp.wnXNf3BQT4 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # grep raid_bdev1 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # awk '{print $6}' 00:29:28.766 ************************************ 00:29:28.766 END TEST raid_write_error_test 00:29:28.766 ************************************ 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@933 -- # fail_per_s=0.00 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@934 -- # has_redundancy raid1 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@935 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:28.766 00:29:28.766 real 0m9.290s 00:29:28.766 user 0m14.470s 00:29:28.766 sys 0m1.049s 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.766 14:12:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.766 14:12:17 bdev_raid -- bdev/bdev_raid.sh@1029 -- # '[' true = true ']' 00:29:28.766 14:12:17 bdev_raid -- bdev/bdev_raid.sh@1030 -- # for n in 2 4 00:29:28.766 14:12:17 bdev_raid -- bdev/bdev_raid.sh@1031 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:29:28.766 14:12:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:28.766 14:12:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.766 14:12:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:28.766 ************************************ 00:29:28.766 START TEST raid_rebuild_test 00:29:28.766 ************************************ 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= 
num_base_bdevs )) 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:28.766 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=144304 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 144304 /var/tmp/spdk-raid.sock 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 144304 ']' 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:28.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:28.767 14:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.767 [2024-07-25 14:12:17.692738] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:29:28.767 [2024-07-25 14:12:17.693567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144304 ] 00:29:28.767 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:28.767 Zero copy mechanism will not be used. 
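The rebuild test drives a different stack than the error tests: two plain malloc+passthru base bdevs, a "spare" that sits behind a delay bdev so rebuild I/O can be observed, a raid1 without superblock, and an nbd export that is filled with random data before a base bdev is pulled. A condensed sketch of that sequence, with names, sizes and delay parameters taken from the trace that follows (the loop is shorthand, not the script's literal code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # two base bdevs: malloc -> passthru (no error injection in this test)
    for i in 1 2; do
        $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc -s $sock bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
    done

    # spare: malloc -> delay -> passthru
    $rpc -s $sock bdev_malloc_create 32 512 -b spare_malloc
    $rpc -s $sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc -s $sock bdev_passthru_create -b spare_delay -p spare

    # raid1 over the two base bdevs; superblock=false, so no -s flag
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1

    # expose the array over nbd and fill it so there is data to rebuild
    $rpc -s $sock nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
    $rpc -s $sock nbd_stop_disk /dev/nbd0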
00:29:29.024 [2024-07-25 14:12:17.859656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.024 [2024-07-25 14:12:18.063445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.281 [2024-07-25 14:12:18.244390] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:29.847 14:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:29.847 14:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:29:29.847 14:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:29.847 14:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:29.847 BaseBdev1_malloc 00:29:29.847 14:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:30.105 [2024-07-25 14:12:19.077353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:30.105 [2024-07-25 14:12:19.077708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.105 [2024-07-25 14:12:19.077903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:30.105 [2024-07-25 14:12:19.078091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.105 [2024-07-25 14:12:19.080911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.105 [2024-07-25 14:12:19.081094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:30.105 BaseBdev1 00:29:30.105 14:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:30.105 14:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:30.362 BaseBdev2_malloc 00:29:30.620 14:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:30.620 [2024-07-25 14:12:19.632380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:30.620 [2024-07-25 14:12:19.632752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.620 [2024-07-25 14:12:19.632974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:30.620 [2024-07-25 14:12:19.633149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.620 [2024-07-25 14:12:19.635660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.620 [2024-07-25 14:12:19.635851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:30.620 BaseBdev2 00:29:30.620 14:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:31.185 spare_malloc 00:29:31.185 14:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 
0 -t 0 -w 100000 -n 100000 00:29:31.185 spare_delay 00:29:31.442 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:31.442 [2024-07-25 14:12:20.444605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:31.442 [2024-07-25 14:12:20.444974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:31.442 [2024-07-25 14:12:20.445181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:31.442 [2024-07-25 14:12:20.445314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:31.442 [2024-07-25 14:12:20.447862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:31.442 [2024-07-25 14:12:20.448051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:31.442 spare 00:29:31.442 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:32.007 [2024-07-25 14:12:20.756952] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:32.007 [2024-07-25 14:12:20.759452] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:32.008 [2024-07-25 14:12:20.759724] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:29:32.008 [2024-07-25 14:12:20.759853] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:32.008 [2024-07-25 14:12:20.760054] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:32.008 [2024-07-25 14:12:20.760600] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:29:32.008 [2024-07-25 14:12:20.760729] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:29:32.008 [2024-07-25 14:12:20.761069] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.008 14:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.008 14:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:32.008 "name": "raid_bdev1", 00:29:32.008 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:32.008 "strip_size_kb": 0, 00:29:32.008 "state": "online", 00:29:32.008 "raid_level": "raid1", 00:29:32.008 "superblock": false, 00:29:32.008 "num_base_bdevs": 2, 00:29:32.008 "num_base_bdevs_discovered": 2, 00:29:32.008 "num_base_bdevs_operational": 2, 00:29:32.008 "base_bdevs_list": [ 00:29:32.008 { 00:29:32.008 "name": "BaseBdev1", 00:29:32.008 "uuid": "4c362e57-dcaa-5b34-a4dd-61831427b36e", 00:29:32.008 "is_configured": true, 00:29:32.008 "data_offset": 0, 00:29:32.008 "data_size": 65536 00:29:32.008 }, 00:29:32.008 { 00:29:32.008 "name": "BaseBdev2", 00:29:32.008 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:32.008 "is_configured": true, 00:29:32.008 "data_offset": 0, 00:29:32.008 "data_size": 65536 00:29:32.008 } 00:29:32.008 ] 00:29:32.008 }' 00:29:32.008 14:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:32.008 14:12:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.940 14:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:32.940 14:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:32.940 [2024-07-25 14:12:21.933585] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:32.940 14:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:29:32.940 14:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.940 14:12:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:33.198 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk 
raid_bdev1 /dev/nbd0 00:29:33.457 [2024-07-25 14:12:22.401563] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:33.457 /dev/nbd0 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:33.457 1+0 records in 00:29:33.457 1+0 records out 00:29:33.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326066 s, 12.6 MB/s 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:33.457 14:12:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:40.102 65536+0 records in 00:29:40.102 65536+0 records out 00:29:40.102 33554432 bytes (34 MB, 32 MiB) copied, 5.47073 s, 6.1 MB/s 00:29:40.102 14:12:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:40.102 14:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:40.102 14:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:40.102 14:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:40.102 14:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:40.102 14:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:40.102 14:12:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:29:40.102 [2024-07-25 14:12:28.206315] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:40.102 [2024-07-25 14:12:28.490104] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:40.102 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.103 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.103 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.103 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.103 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.103 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.103 14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.103 "name": "raid_bdev1", 00:29:40.103 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:40.103 "strip_size_kb": 0, 00:29:40.103 "state": "online", 00:29:40.103 "raid_level": "raid1", 00:29:40.103 "superblock": false, 00:29:40.103 "num_base_bdevs": 2, 00:29:40.103 "num_base_bdevs_discovered": 1, 00:29:40.103 "num_base_bdevs_operational": 1, 00:29:40.103 "base_bdevs_list": [ 00:29:40.103 { 00:29:40.103 "name": null, 00:29:40.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.103 "is_configured": false, 00:29:40.103 "data_offset": 0, 00:29:40.103 "data_size": 65536 00:29:40.103 }, 00:29:40.103 { 00:29:40.103 "name": "BaseBdev2", 00:29:40.103 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:40.103 "is_configured": true, 00:29:40.103 "data_offset": 0, 00:29:40.103 "data_size": 65536 00:29:40.103 } 00:29:40.103 ] 00:29:40.103 }' 00:29:40.103 
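Everything verify_raid_bdev_state checks after BaseBdev1 is pulled comes from that one bdev_raid_get_bdevs dump: the array must still be online at raid1, but with only one base bdev discovered/operational and an empty slot 0. A minimal sketch of the same checks with jq (field names exactly as in the JSON above; a zero exit status means the expectation holds):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # still online as raid1, strip size 0 (raid1 has no stripes)
    jq -e '.state == "online" and .raid_level == "raid1" and .strip_size_kb == 0' <<< "$info"
    # degraded: one of two base bdevs left
    jq -e '.num_base_bdevs_discovered == 1 and .num_base_bdevs_operational == 1' <<< "$info"
    # slot 0 is now empty: null name, all-zero uuid, not configured
    jq -e '.base_bdevs_list[0].name == null and (.base_bdevs_list[0].is_configured | not)' <<< "$info"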
14:12:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.103 14:12:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.667 14:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:40.925 [2024-07-25 14:12:29.798654] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:40.925 [2024-07-25 14:12:29.814048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09960 00:29:40.925 [2024-07-25 14:12:29.816207] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:40.925 14:12:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:41.884 14:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:41.885 14:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.885 14:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:41.885 14:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:41.885 14:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.885 14:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.885 14:12:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.143 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.143 "name": "raid_bdev1", 00:29:42.143 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:42.143 "strip_size_kb": 0, 00:29:42.143 "state": "online", 00:29:42.143 "raid_level": "raid1", 00:29:42.143 "superblock": false, 00:29:42.143 "num_base_bdevs": 2, 00:29:42.143 "num_base_bdevs_discovered": 2, 00:29:42.143 "num_base_bdevs_operational": 2, 00:29:42.143 "process": { 00:29:42.143 "type": "rebuild", 00:29:42.143 "target": "spare", 00:29:42.143 "progress": { 00:29:42.143 "blocks": 26624, 00:29:42.143 "percent": 40 00:29:42.143 } 00:29:42.143 }, 00:29:42.143 "base_bdevs_list": [ 00:29:42.143 { 00:29:42.143 "name": "spare", 00:29:42.143 "uuid": "8366fb4c-3a5f-534b-b86a-2d763630f410", 00:29:42.143 "is_configured": true, 00:29:42.143 "data_offset": 0, 00:29:42.143 "data_size": 65536 00:29:42.143 }, 00:29:42.143 { 00:29:42.143 "name": "BaseBdev2", 00:29:42.143 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:42.143 "is_configured": true, 00:29:42.143 "data_offset": 0, 00:29:42.143 "data_size": 65536 00:29:42.143 } 00:29:42.143 ] 00:29:42.143 }' 00:29:42.143 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.401 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:42.401 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:42.401 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.401 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:42.659 [2024-07-25 14:12:31.545993] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:42.659 [2024-07-25 14:12:31.629314] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:42.659 [2024-07-25 14:12:31.629471] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.659 [2024-07-25 14:12:31.629508] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:42.659 [2024-07-25 14:12:31.629517] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.659 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.225 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:43.225 "name": "raid_bdev1", 00:29:43.225 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:43.225 "strip_size_kb": 0, 00:29:43.225 "state": "online", 00:29:43.225 "raid_level": "raid1", 00:29:43.225 "superblock": false, 00:29:43.225 "num_base_bdevs": 2, 00:29:43.225 "num_base_bdevs_discovered": 1, 00:29:43.225 "num_base_bdevs_operational": 1, 00:29:43.225 "base_bdevs_list": [ 00:29:43.225 { 00:29:43.225 "name": null, 00:29:43.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.225 "is_configured": false, 00:29:43.225 "data_offset": 0, 00:29:43.225 "data_size": 65536 00:29:43.225 }, 00:29:43.225 { 00:29:43.225 "name": "BaseBdev2", 00:29:43.225 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:43.225 "is_configured": true, 00:29:43.225 "data_offset": 0, 00:29:43.225 "data_size": 65536 00:29:43.225 } 00:29:43.225 ] 00:29:43.225 }' 00:29:43.225 14:12:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:43.225 14:12:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.791 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:43.791 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.791 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:43.791 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 
00:29:43.791 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.791 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.791 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.049 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.049 "name": "raid_bdev1", 00:29:44.049 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:44.049 "strip_size_kb": 0, 00:29:44.049 "state": "online", 00:29:44.049 "raid_level": "raid1", 00:29:44.049 "superblock": false, 00:29:44.049 "num_base_bdevs": 2, 00:29:44.049 "num_base_bdevs_discovered": 1, 00:29:44.049 "num_base_bdevs_operational": 1, 00:29:44.049 "base_bdevs_list": [ 00:29:44.049 { 00:29:44.049 "name": null, 00:29:44.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.049 "is_configured": false, 00:29:44.049 "data_offset": 0, 00:29:44.049 "data_size": 65536 00:29:44.049 }, 00:29:44.049 { 00:29:44.049 "name": "BaseBdev2", 00:29:44.049 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:44.049 "is_configured": true, 00:29:44.049 "data_offset": 0, 00:29:44.049 "data_size": 65536 00:29:44.049 } 00:29:44.049 ] 00:29:44.049 }' 00:29:44.049 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.049 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:44.049 14:12:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.049 14:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:44.049 14:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:44.307 [2024-07-25 14:12:33.296484] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:44.307 [2024-07-25 14:12:33.311422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:29:44.307 [2024-07-25 14:12:33.313882] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:44.307 14:12:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.684 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.684 "name": "raid_bdev1", 00:29:45.684 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:45.684 "strip_size_kb": 0, 00:29:45.684 
"state": "online", 00:29:45.684 "raid_level": "raid1", 00:29:45.684 "superblock": false, 00:29:45.684 "num_base_bdevs": 2, 00:29:45.684 "num_base_bdevs_discovered": 2, 00:29:45.684 "num_base_bdevs_operational": 2, 00:29:45.684 "process": { 00:29:45.684 "type": "rebuild", 00:29:45.684 "target": "spare", 00:29:45.684 "progress": { 00:29:45.684 "blocks": 26624, 00:29:45.684 "percent": 40 00:29:45.684 } 00:29:45.684 }, 00:29:45.684 "base_bdevs_list": [ 00:29:45.684 { 00:29:45.685 "name": "spare", 00:29:45.685 "uuid": "8366fb4c-3a5f-534b-b86a-2d763630f410", 00:29:45.685 "is_configured": true, 00:29:45.685 "data_offset": 0, 00:29:45.685 "data_size": 65536 00:29:45.685 }, 00:29:45.685 { 00:29:45.685 "name": "BaseBdev2", 00:29:45.685 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:45.685 "is_configured": true, 00:29:45.685 "data_offset": 0, 00:29:45.685 "data_size": 65536 00:29:45.685 } 00:29:45.685 ] 00:29:45.685 }' 00:29:45.685 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:45.685 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:45.685 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=920 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.943 14:12:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.201 14:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.201 "name": "raid_bdev1", 00:29:46.201 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:46.201 "strip_size_kb": 0, 00:29:46.201 "state": "online", 00:29:46.201 "raid_level": "raid1", 00:29:46.201 "superblock": false, 00:29:46.201 "num_base_bdevs": 2, 00:29:46.201 "num_base_bdevs_discovered": 2, 00:29:46.201 "num_base_bdevs_operational": 2, 00:29:46.201 "process": { 00:29:46.201 "type": "rebuild", 00:29:46.201 "target": "spare", 00:29:46.201 "progress": { 00:29:46.201 "blocks": 34816, 00:29:46.201 "percent": 53 00:29:46.201 } 00:29:46.201 }, 00:29:46.201 "base_bdevs_list": [ 00:29:46.201 { 
00:29:46.201 "name": "spare", 00:29:46.201 "uuid": "8366fb4c-3a5f-534b-b86a-2d763630f410", 00:29:46.201 "is_configured": true, 00:29:46.201 "data_offset": 0, 00:29:46.201 "data_size": 65536 00:29:46.201 }, 00:29:46.201 { 00:29:46.201 "name": "BaseBdev2", 00:29:46.201 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:46.201 "is_configured": true, 00:29:46.201 "data_offset": 0, 00:29:46.201 "data_size": 65536 00:29:46.201 } 00:29:46.201 ] 00:29:46.201 }' 00:29:46.201 14:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.201 14:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:46.201 14:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.201 14:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.201 14:12:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.577 "name": "raid_bdev1", 00:29:47.577 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:47.577 "strip_size_kb": 0, 00:29:47.577 "state": "online", 00:29:47.577 "raid_level": "raid1", 00:29:47.577 "superblock": false, 00:29:47.577 "num_base_bdevs": 2, 00:29:47.577 "num_base_bdevs_discovered": 2, 00:29:47.577 "num_base_bdevs_operational": 2, 00:29:47.577 "process": { 00:29:47.577 "type": "rebuild", 00:29:47.577 "target": "spare", 00:29:47.577 "progress": { 00:29:47.577 "blocks": 63488, 00:29:47.577 "percent": 96 00:29:47.577 } 00:29:47.577 }, 00:29:47.577 "base_bdevs_list": [ 00:29:47.577 { 00:29:47.577 "name": "spare", 00:29:47.577 "uuid": "8366fb4c-3a5f-534b-b86a-2d763630f410", 00:29:47.577 "is_configured": true, 00:29:47.577 "data_offset": 0, 00:29:47.577 "data_size": 65536 00:29:47.577 }, 00:29:47.577 { 00:29:47.577 "name": "BaseBdev2", 00:29:47.577 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:47.577 "is_configured": true, 00:29:47.577 "data_offset": 0, 00:29:47.577 "data_size": 65536 00:29:47.577 } 00:29:47.577 ] 00:29:47.577 }' 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.577 [2024-07-25 14:12:36.535911] bdev_raid.c:2894:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:47.577 [2024-07-25 14:12:36.536223] bdev_raid.c:2556:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:47.577 [2024-07-25 14:12:36.536417] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:47.577 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.834 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:47.834 14:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.798 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.056 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:49.056 "name": "raid_bdev1", 00:29:49.056 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:49.056 "strip_size_kb": 0, 00:29:49.056 "state": "online", 00:29:49.056 "raid_level": "raid1", 00:29:49.056 "superblock": false, 00:29:49.056 "num_base_bdevs": 2, 00:29:49.056 "num_base_bdevs_discovered": 2, 00:29:49.056 "num_base_bdevs_operational": 2, 00:29:49.056 "base_bdevs_list": [ 00:29:49.056 { 00:29:49.056 "name": "spare", 00:29:49.056 "uuid": "8366fb4c-3a5f-534b-b86a-2d763630f410", 00:29:49.056 "is_configured": true, 00:29:49.056 "data_offset": 0, 00:29:49.056 "data_size": 65536 00:29:49.056 }, 00:29:49.056 { 00:29:49.056 "name": "BaseBdev2", 00:29:49.056 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:49.056 "is_configured": true, 00:29:49.056 "data_offset": 0, 00:29:49.056 "data_size": 65536 00:29:49.056 } 00:29:49.056 ] 00:29:49.056 }' 00:29:49.056 14:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:29:49.056 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:49.623 "name": "raid_bdev1", 00:29:49.623 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:49.623 "strip_size_kb": 0, 00:29:49.623 "state": "online", 00:29:49.623 "raid_level": "raid1", 00:29:49.623 "superblock": false, 00:29:49.623 "num_base_bdevs": 2, 00:29:49.623 "num_base_bdevs_discovered": 2, 00:29:49.623 "num_base_bdevs_operational": 2, 00:29:49.623 "base_bdevs_list": [ 00:29:49.623 { 00:29:49.623 "name": "spare", 00:29:49.623 "uuid": "8366fb4c-3a5f-534b-b86a-2d763630f410", 00:29:49.623 "is_configured": true, 00:29:49.623 "data_offset": 0, 00:29:49.623 "data_size": 65536 00:29:49.623 }, 00:29:49.623 { 00:29:49.623 "name": "BaseBdev2", 00:29:49.623 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:49.623 "is_configured": true, 00:29:49.623 "data_offset": 0, 00:29:49.623 "data_size": 65536 00:29:49.623 } 00:29:49.623 ] 00:29:49.623 }' 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.623 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.882 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:49.882 "name": "raid_bdev1", 00:29:49.882 "uuid": "dcc38fb8-dba1-41b6-916c-57fd1fd45d25", 00:29:49.882 "strip_size_kb": 0, 00:29:49.882 "state": "online", 00:29:49.882 "raid_level": "raid1", 00:29:49.882 "superblock": false, 00:29:49.882 "num_base_bdevs": 2, 00:29:49.882 "num_base_bdevs_discovered": 2, 00:29:49.882 "num_base_bdevs_operational": 2, 00:29:49.882 "base_bdevs_list": [ 00:29:49.882 { 00:29:49.882 "name": "spare", 00:29:49.882 "uuid": 
"8366fb4c-3a5f-534b-b86a-2d763630f410", 00:29:49.882 "is_configured": true, 00:29:49.882 "data_offset": 0, 00:29:49.882 "data_size": 65536 00:29:49.882 }, 00:29:49.882 { 00:29:49.882 "name": "BaseBdev2", 00:29:49.882 "uuid": "cb471e97-227a-5eb8-bd47-67abce5fe19d", 00:29:49.882 "is_configured": true, 00:29:49.882 "data_offset": 0, 00:29:49.882 "data_size": 65536 00:29:49.882 } 00:29:49.882 ] 00:29:49.882 }' 00:29:49.882 14:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:49.882 14:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.816 14:12:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:50.816 [2024-07-25 14:12:39.741127] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:50.816 [2024-07-25 14:12:39.741379] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:50.816 [2024-07-25 14:12:39.741591] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:50.816 [2024-07-25 14:12:39.741813] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:50.816 [2024-07-25 14:12:39.741942] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:29:50.816 14:12:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.816 14:12:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:51.074 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:51.640 /dev/nbd0 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@869 -- # local i 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:51.640 1+0 records in 00:29:51.640 1+0 records out 00:29:51.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610218 s, 6.7 MB/s 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:51.640 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:51.898 /dev/nbd1 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:51.898 1+0 records in 00:29:51.898 1+0 records out 00:29:51.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752048 s, 5.4 MB/s 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:51.898 14:12:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:51.898 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.899 14:12:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:52.464 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:52.722 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:52.722 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 144304 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@950 -- # '[' -z 144304 ']' 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 144304 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 144304 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 144304' 00:29:52.723 killing process with pid 144304 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 144304 00:29:52.723 Received shutdown signal, test time was about 60.000000 seconds 00:29:52.723 00:29:52.723 Latency(us) 00:29:52.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.723 =================================================================================================================== 00:29:52.723 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:52.723 14:12:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 144304 00:29:52.723 [2024-07-25 14:12:41.626663] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:52.981 [2024-07-25 14:12:41.895611] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:54.355 ************************************ 00:29:54.355 END TEST raid_rebuild_test 00:29:54.355 ************************************ 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:29:54.355 00:29:54.355 real 0m25.482s 00:29:54.355 user 0m35.683s 00:29:54.355 sys 0m3.987s 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.355 14:12:43 bdev_raid -- bdev/bdev_raid.sh@1032 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:29:54.355 14:12:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:54.355 14:12:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:54.355 14:12:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:54.355 ************************************ 00:29:54.355 START TEST raid_rebuild_test_sb 00:29:54.355 ************************************ 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=144872 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 144872 /var/tmp/spdk-raid.sock 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:54.355 14:12:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 144872 ']' 00:29:54.356 14:12:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:54.356 14:12:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.356 14:12:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:54.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:54.356 14:12:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.356 14:12:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:54.356 [2024-07-25 14:12:43.256324] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:29:54.356 [2024-07-25 14:12:43.256771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144872 ] 00:29:54.356 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:54.356 Zero copy mechanism will not be used. 00:29:54.613 [2024-07-25 14:12:43.428483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.870 [2024-07-25 14:12:43.685826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.870 [2024-07-25 14:12:43.896582] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:55.434 14:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:55.434 14:12:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:29:55.434 14:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:55.434 14:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:55.692 BaseBdev1_malloc 00:29:55.692 14:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:55.950 [2024-07-25 14:12:44.912887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:55.950 [2024-07-25 14:12:44.913213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.950 [2024-07-25 14:12:44.913412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:55.950 [2024-07-25 14:12:44.913561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.950 [2024-07-25 14:12:44.916445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.950 [2024-07-25 14:12:44.916619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:55.950 BaseBdev1 00:29:55.950 14:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:55.950 14:12:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:56.208 BaseBdev2_malloc 00:29:56.208 14:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:56.774 [2024-07-25 14:12:45.547085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:56.774 [2024-07-25 14:12:45.547487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:56.774 [2024-07-25 14:12:45.547659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:56.774 [2024-07-25 14:12:45.547799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:56.774 [2024-07-25 14:12:45.550451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:56.774 [2024-07-25 14:12:45.550624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:29:56.774 BaseBdev2 00:29:56.774 14:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:57.031 spare_malloc 00:29:57.031 14:12:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:57.289 spare_delay 00:29:57.289 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:57.546 [2024-07-25 14:12:46.506059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:57.546 [2024-07-25 14:12:46.506344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:57.546 [2024-07-25 14:12:46.506522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:57.546 [2024-07-25 14:12:46.506656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:57.546 [2024-07-25 14:12:46.509095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:57.546 [2024-07-25 14:12:46.509272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:57.546 spare 00:29:57.546 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:29:57.804 [2024-07-25 14:12:46.790289] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:57.804 [2024-07-25 14:12:46.792581] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:57.804 [2024-07-25 14:12:46.792952] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:29:57.804 [2024-07-25 14:12:46.793154] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:57.804 [2024-07-25 14:12:46.793359] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:57.804 [2024-07-25 14:12:46.793918] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:29:57.804 [2024-07-25 14:12:46.794059] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:29:57.804 [2024-07-25 14:12:46.794399] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:57.804 14:12:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.804 14:12:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.369 14:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:58.369 "name": "raid_bdev1", 00:29:58.369 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:29:58.369 "strip_size_kb": 0, 00:29:58.369 "state": "online", 00:29:58.369 "raid_level": "raid1", 00:29:58.369 "superblock": true, 00:29:58.369 "num_base_bdevs": 2, 00:29:58.369 "num_base_bdevs_discovered": 2, 00:29:58.369 "num_base_bdevs_operational": 2, 00:29:58.369 "base_bdevs_list": [ 00:29:58.369 { 00:29:58.369 "name": "BaseBdev1", 00:29:58.369 "uuid": "a2cc9772-7155-5922-a8e5-8e75314d0ce9", 00:29:58.369 "is_configured": true, 00:29:58.369 "data_offset": 2048, 00:29:58.369 "data_size": 63488 00:29:58.369 }, 00:29:58.369 { 00:29:58.369 "name": "BaseBdev2", 00:29:58.369 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:29:58.369 "is_configured": true, 00:29:58.369 "data_offset": 2048, 00:29:58.369 "data_size": 63488 00:29:58.369 } 00:29:58.369 ] 00:29:58.369 }' 00:29:58.369 14:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:58.369 14:12:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:58.935 14:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:58.935 14:12:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:59.193 [2024-07-25 14:12:48.082968] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:59.193 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:29:59.193 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.193 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:59.451 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:59.709 [2024-07-25 14:12:48.606911] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:59.709 /dev/nbd0 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.709 1+0 records in 00:29:59.709 1+0 records out 00:29:59.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00090942 s, 4.5 MB/s 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:59.709 14:12:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:06.318 63488+0 records in 00:30:06.318 63488+0 records out 00:30:06.318 32505856 bytes (33 MB, 31 MiB) copied, 5.76308 s, 5.6 MB/s 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:06.318 14:12:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:06.318 [2024-07-25 14:12:54.739782] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:06.318 14:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:06.318 [2024-07-25 14:12:55.003429] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:06.318 "name": "raid_bdev1", 00:30:06.318 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:06.318 "strip_size_kb": 0, 00:30:06.318 "state": 
"online", 00:30:06.318 "raid_level": "raid1", 00:30:06.318 "superblock": true, 00:30:06.318 "num_base_bdevs": 2, 00:30:06.318 "num_base_bdevs_discovered": 1, 00:30:06.318 "num_base_bdevs_operational": 1, 00:30:06.318 "base_bdevs_list": [ 00:30:06.318 { 00:30:06.318 "name": null, 00:30:06.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.318 "is_configured": false, 00:30:06.318 "data_offset": 2048, 00:30:06.318 "data_size": 63488 00:30:06.318 }, 00:30:06.318 { 00:30:06.318 "name": "BaseBdev2", 00:30:06.318 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:06.318 "is_configured": true, 00:30:06.318 "data_offset": 2048, 00:30:06.318 "data_size": 63488 00:30:06.318 } 00:30:06.318 ] 00:30:06.318 }' 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:06.318 14:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.252 14:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:07.252 [2024-07-25 14:12:56.219888] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:07.252 [2024-07-25 14:12:56.235514] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca30f0 00:30:07.252 [2024-07-25 14:12:56.237926] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:07.252 14:12:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:08.642 "name": "raid_bdev1", 00:30:08.642 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:08.642 "strip_size_kb": 0, 00:30:08.642 "state": "online", 00:30:08.642 "raid_level": "raid1", 00:30:08.642 "superblock": true, 00:30:08.642 "num_base_bdevs": 2, 00:30:08.642 "num_base_bdevs_discovered": 2, 00:30:08.642 "num_base_bdevs_operational": 2, 00:30:08.642 "process": { 00:30:08.642 "type": "rebuild", 00:30:08.642 "target": "spare", 00:30:08.642 "progress": { 00:30:08.642 "blocks": 24576, 00:30:08.642 "percent": 38 00:30:08.642 } 00:30:08.642 }, 00:30:08.642 "base_bdevs_list": [ 00:30:08.642 { 00:30:08.642 "name": "spare", 00:30:08.642 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:08.642 "is_configured": true, 00:30:08.642 "data_offset": 2048, 00:30:08.642 "data_size": 63488 00:30:08.642 }, 00:30:08.642 { 00:30:08.642 "name": "BaseBdev2", 00:30:08.642 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:08.642 "is_configured": true, 00:30:08.642 "data_offset": 2048, 
00:30:08.642 "data_size": 63488 00:30:08.642 } 00:30:08.642 ] 00:30:08.642 }' 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:08.642 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:08.900 [2024-07-25 14:12:57.847959] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:08.900 [2024-07-25 14:12:57.848397] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:08.900 [2024-07-25 14:12:57.848611] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:08.900 [2024-07-25 14:12:57.848748] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:08.900 [2024-07-25 14:12:57.848857] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:08.900 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.901 14:12:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.159 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.159 "name": "raid_bdev1", 00:30:09.159 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:09.159 "strip_size_kb": 0, 00:30:09.159 "state": "online", 00:30:09.159 "raid_level": "raid1", 00:30:09.159 "superblock": true, 00:30:09.159 "num_base_bdevs": 2, 00:30:09.159 "num_base_bdevs_discovered": 1, 00:30:09.159 "num_base_bdevs_operational": 1, 00:30:09.159 "base_bdevs_list": [ 00:30:09.159 { 00:30:09.159 "name": null, 00:30:09.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.159 "is_configured": false, 00:30:09.159 "data_offset": 2048, 00:30:09.159 "data_size": 63488 00:30:09.159 }, 00:30:09.159 { 00:30:09.159 "name": "BaseBdev2", 00:30:09.159 "uuid": 
"8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:09.159 "is_configured": true, 00:30:09.159 "data_offset": 2048, 00:30:09.159 "data_size": 63488 00:30:09.159 } 00:30:09.159 ] 00:30:09.159 }' 00:30:09.159 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.159 14:12:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.093 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:10.093 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.093 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:10.093 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:10.093 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.093 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.093 14:12:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.093 14:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:10.093 "name": "raid_bdev1", 00:30:10.093 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:10.093 "strip_size_kb": 0, 00:30:10.093 "state": "online", 00:30:10.093 "raid_level": "raid1", 00:30:10.093 "superblock": true, 00:30:10.093 "num_base_bdevs": 2, 00:30:10.093 "num_base_bdevs_discovered": 1, 00:30:10.093 "num_base_bdevs_operational": 1, 00:30:10.093 "base_bdevs_list": [ 00:30:10.093 { 00:30:10.093 "name": null, 00:30:10.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.093 "is_configured": false, 00:30:10.093 "data_offset": 2048, 00:30:10.093 "data_size": 63488 00:30:10.093 }, 00:30:10.093 { 00:30:10.093 "name": "BaseBdev2", 00:30:10.093 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:10.093 "is_configured": true, 00:30:10.093 "data_offset": 2048, 00:30:10.093 "data_size": 63488 00:30:10.093 } 00:30:10.093 ] 00:30:10.093 }' 00:30:10.093 14:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:10.351 14:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:10.351 14:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:10.351 14:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:10.351 14:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:10.610 [2024-07-25 14:12:59.483307] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:10.610 [2024-07-25 14:12:59.497856] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:30:10.610 [2024-07-25 14:12:59.500092] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:10.610 14:12:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:11.545 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:11.545 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:30:11.545 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:11.545 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:11.545 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:11.545 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.545 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:11.879 "name": "raid_bdev1", 00:30:11.879 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:11.879 "strip_size_kb": 0, 00:30:11.879 "state": "online", 00:30:11.879 "raid_level": "raid1", 00:30:11.879 "superblock": true, 00:30:11.879 "num_base_bdevs": 2, 00:30:11.879 "num_base_bdevs_discovered": 2, 00:30:11.879 "num_base_bdevs_operational": 2, 00:30:11.879 "process": { 00:30:11.879 "type": "rebuild", 00:30:11.879 "target": "spare", 00:30:11.879 "progress": { 00:30:11.879 "blocks": 24576, 00:30:11.879 "percent": 38 00:30:11.879 } 00:30:11.879 }, 00:30:11.879 "base_bdevs_list": [ 00:30:11.879 { 00:30:11.879 "name": "spare", 00:30:11.879 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:11.879 "is_configured": true, 00:30:11.879 "data_offset": 2048, 00:30:11.879 "data_size": 63488 00:30:11.879 }, 00:30:11.879 { 00:30:11.879 "name": "BaseBdev2", 00:30:11.879 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:11.879 "is_configured": true, 00:30:11.879 "data_offset": 2048, 00:30:11.879 "data_size": 63488 00:30:11.879 } 00:30:11.879 ] 00:30:11.879 }' 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:30:11.879 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=946 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 
-- # local target=spare 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.879 14:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.139 14:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.139 "name": "raid_bdev1", 00:30:12.139 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:12.139 "strip_size_kb": 0, 00:30:12.139 "state": "online", 00:30:12.139 "raid_level": "raid1", 00:30:12.139 "superblock": true, 00:30:12.139 "num_base_bdevs": 2, 00:30:12.139 "num_base_bdevs_discovered": 2, 00:30:12.139 "num_base_bdevs_operational": 2, 00:30:12.139 "process": { 00:30:12.139 "type": "rebuild", 00:30:12.139 "target": "spare", 00:30:12.139 "progress": { 00:30:12.139 "blocks": 32768, 00:30:12.139 "percent": 51 00:30:12.139 } 00:30:12.139 }, 00:30:12.139 "base_bdevs_list": [ 00:30:12.139 { 00:30:12.139 "name": "spare", 00:30:12.139 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:12.139 "is_configured": true, 00:30:12.139 "data_offset": 2048, 00:30:12.140 "data_size": 63488 00:30:12.140 }, 00:30:12.140 { 00:30:12.140 "name": "BaseBdev2", 00:30:12.140 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:12.140 "is_configured": true, 00:30:12.140 "data_offset": 2048, 00:30:12.140 "data_size": 63488 00:30:12.140 } 00:30:12.140 ] 00:30:12.140 }' 00:30:12.140 14:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.398 14:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:12.398 14:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.398 14:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:12.398 14:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.331 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.588 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:13.588 "name": "raid_bdev1", 00:30:13.588 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:13.588 "strip_size_kb": 0, 00:30:13.588 "state": "online", 00:30:13.588 "raid_level": "raid1", 00:30:13.588 "superblock": true, 00:30:13.588 "num_base_bdevs": 2, 00:30:13.588 "num_base_bdevs_discovered": 2, 
00:30:13.588 "num_base_bdevs_operational": 2, 00:30:13.588 "process": { 00:30:13.588 "type": "rebuild", 00:30:13.588 "target": "spare", 00:30:13.588 "progress": { 00:30:13.588 "blocks": 61440, 00:30:13.589 "percent": 96 00:30:13.589 } 00:30:13.589 }, 00:30:13.589 "base_bdevs_list": [ 00:30:13.589 { 00:30:13.589 "name": "spare", 00:30:13.589 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:13.589 "is_configured": true, 00:30:13.589 "data_offset": 2048, 00:30:13.589 "data_size": 63488 00:30:13.589 }, 00:30:13.589 { 00:30:13.589 "name": "BaseBdev2", 00:30:13.589 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:13.589 "is_configured": true, 00:30:13.589 "data_offset": 2048, 00:30:13.589 "data_size": 63488 00:30:13.589 } 00:30:13.589 ] 00:30:13.589 }' 00:30:13.589 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:13.589 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:13.589 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:13.589 [2024-07-25 14:13:02.620003] bdev_raid.c:2894:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:13.589 [2024-07-25 14:13:02.620319] bdev_raid.c:2556:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:13.589 [2024-07-25 14:13:02.620666] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:13.846 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:13.846 14:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.781 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.039 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:15.039 "name": "raid_bdev1", 00:30:15.039 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:15.039 "strip_size_kb": 0, 00:30:15.039 "state": "online", 00:30:15.039 "raid_level": "raid1", 00:30:15.039 "superblock": true, 00:30:15.039 "num_base_bdevs": 2, 00:30:15.039 "num_base_bdevs_discovered": 2, 00:30:15.039 "num_base_bdevs_operational": 2, 00:30:15.039 "base_bdevs_list": [ 00:30:15.039 { 00:30:15.039 "name": "spare", 00:30:15.039 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:15.039 "is_configured": true, 00:30:15.039 "data_offset": 2048, 00:30:15.039 "data_size": 63488 00:30:15.039 }, 00:30:15.039 { 00:30:15.039 "name": "BaseBdev2", 00:30:15.039 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:15.039 "is_configured": true, 00:30:15.039 
"data_offset": 2048, 00:30:15.039 "data_size": 63488 00:30:15.039 } 00:30:15.039 ] 00:30:15.039 }' 00:30:15.039 14:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:15.039 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:15.039 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:15.039 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:15.039 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:30:15.040 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:15.040 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:15.040 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:15.040 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:15.040 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:15.040 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.040 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:15.620 "name": "raid_bdev1", 00:30:15.620 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:15.620 "strip_size_kb": 0, 00:30:15.620 "state": "online", 00:30:15.620 "raid_level": "raid1", 00:30:15.620 "superblock": true, 00:30:15.620 "num_base_bdevs": 2, 00:30:15.620 "num_base_bdevs_discovered": 2, 00:30:15.620 "num_base_bdevs_operational": 2, 00:30:15.620 "base_bdevs_list": [ 00:30:15.620 { 00:30:15.620 "name": "spare", 00:30:15.620 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:15.620 "is_configured": true, 00:30:15.620 "data_offset": 2048, 00:30:15.620 "data_size": 63488 00:30:15.620 }, 00:30:15.620 { 00:30:15.620 "name": "BaseBdev2", 00:30:15.620 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:15.620 "is_configured": true, 00:30:15.620 "data_offset": 2048, 00:30:15.620 "data_size": 63488 00:30:15.620 } 00:30:15.620 ] 00:30:15.620 }' 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.620 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.901 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:15.901 "name": "raid_bdev1", 00:30:15.901 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:15.901 "strip_size_kb": 0, 00:30:15.901 "state": "online", 00:30:15.901 "raid_level": "raid1", 00:30:15.901 "superblock": true, 00:30:15.901 "num_base_bdevs": 2, 00:30:15.901 "num_base_bdevs_discovered": 2, 00:30:15.901 "num_base_bdevs_operational": 2, 00:30:15.901 "base_bdevs_list": [ 00:30:15.901 { 00:30:15.901 "name": "spare", 00:30:15.901 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:15.901 "is_configured": true, 00:30:15.901 "data_offset": 2048, 00:30:15.901 "data_size": 63488 00:30:15.901 }, 00:30:15.901 { 00:30:15.901 "name": "BaseBdev2", 00:30:15.901 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:15.901 "is_configured": true, 00:30:15.901 "data_offset": 2048, 00:30:15.901 "data_size": 63488 00:30:15.901 } 00:30:15.901 ] 00:30:15.901 }' 00:30:15.901 14:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:15.901 14:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.466 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:16.724 [2024-07-25 14:13:05.625896] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:16.724 [2024-07-25 14:13:05.626137] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:16.724 [2024-07-25 14:13:05.626356] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:16.724 [2024-07-25 14:13:05.626548] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:16.724 [2024-07-25 14:13:05.626672] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:30:16.724 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.724 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:16.983 14:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:17.241 /dev/nbd0 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:17.241 1+0 records in 00:30:17.241 1+0 records out 00:30:17.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621608 s, 6.6 MB/s 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:17.241 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:17.499 /dev/nbd1 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:17.499 14:13:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:17.499 1+0 records in 00:30:17.499 1+0 records out 00:30:17.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528005 s, 7.8 MB/s 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:17.499 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:17.500 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:17.758 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:17.758 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:17.758 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:17.758 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:17.758 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:17.758 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:17.758 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:18.016 14:13:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:18.016 14:13:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:30:18.273 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:18.531 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:18.789 [2024-07-25 14:13:07.757422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:18.789 [2024-07-25 14:13:07.757674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.789 [2024-07-25 14:13:07.757892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:30:18.789 [2024-07-25 14:13:07.758020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.789 [2024-07-25 14:13:07.760659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.789 [2024-07-25 14:13:07.760831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:18.789 [2024-07-25 14:13:07.761067] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:18.789 [2024-07-25 14:13:07.761274] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:18.789 [2024-07-25 14:13:07.761577] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:18.789 spare 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:18.789 
14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.789 14:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.047 [2024-07-25 14:13:07.861851] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:30:19.047 [2024-07-25 14:13:07.862020] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:19.047 [2024-07-25 14:13:07.862284] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:30:19.047 [2024-07-25 14:13:07.862826] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:30:19.047 [2024-07-25 14:13:07.862945] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:30:19.047 [2024-07-25 14:13:07.863213] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:19.047 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:19.047 "name": "raid_bdev1", 00:30:19.047 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:19.047 "strip_size_kb": 0, 00:30:19.047 "state": "online", 00:30:19.047 "raid_level": "raid1", 00:30:19.047 "superblock": true, 00:30:19.047 "num_base_bdevs": 2, 00:30:19.047 "num_base_bdevs_discovered": 2, 00:30:19.047 "num_base_bdevs_operational": 2, 00:30:19.047 "base_bdevs_list": [ 00:30:19.047 { 00:30:19.047 "name": "spare", 00:30:19.047 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:19.047 "is_configured": true, 00:30:19.047 "data_offset": 2048, 00:30:19.047 "data_size": 63488 00:30:19.047 }, 00:30:19.047 { 00:30:19.047 "name": "BaseBdev2", 00:30:19.047 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:19.047 "is_configured": true, 00:30:19.047 "data_offset": 2048, 00:30:19.047 "data_size": 63488 00:30:19.047 } 00:30:19.047 ] 00:30:19.047 }' 00:30:19.047 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:19.047 14:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.613 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:19.872 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:19.872 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:19.872 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:19.872 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:19.872 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.872 14:13:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.130 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:20.130 "name": "raid_bdev1", 00:30:20.130 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:20.130 "strip_size_kb": 0, 00:30:20.130 "state": "online", 00:30:20.130 "raid_level": "raid1", 00:30:20.130 "superblock": true, 00:30:20.130 "num_base_bdevs": 2, 00:30:20.130 "num_base_bdevs_discovered": 2, 00:30:20.130 "num_base_bdevs_operational": 2, 00:30:20.130 "base_bdevs_list": [ 00:30:20.130 { 00:30:20.130 "name": "spare", 00:30:20.130 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:20.130 "is_configured": true, 00:30:20.130 "data_offset": 2048, 00:30:20.130 "data_size": 63488 00:30:20.130 }, 00:30:20.130 { 00:30:20.130 "name": "BaseBdev2", 00:30:20.130 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:20.130 "is_configured": true, 00:30:20.130 "data_offset": 2048, 00:30:20.130 "data_size": 63488 00:30:20.130 } 00:30:20.130 ] 00:30:20.130 }' 00:30:20.130 14:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:20.130 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:20.130 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:20.130 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:20.130 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.130 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:20.388 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:30:20.388 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:20.646 [2024-07-25 14:13:09.590492] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.646 14:13:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.904 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:20.904 "name": "raid_bdev1", 00:30:20.904 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:20.904 "strip_size_kb": 0, 00:30:20.904 "state": "online", 00:30:20.904 "raid_level": "raid1", 00:30:20.904 "superblock": true, 00:30:20.904 "num_base_bdevs": 2, 00:30:20.904 "num_base_bdevs_discovered": 1, 00:30:20.904 "num_base_bdevs_operational": 1, 00:30:20.904 "base_bdevs_list": [ 00:30:20.904 { 00:30:20.904 "name": null, 00:30:20.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.904 "is_configured": false, 00:30:20.904 "data_offset": 2048, 00:30:20.904 "data_size": 63488 00:30:20.904 }, 00:30:20.904 { 00:30:20.904 "name": "BaseBdev2", 00:30:20.904 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:20.904 "is_configured": true, 00:30:20.904 "data_offset": 2048, 00:30:20.904 "data_size": 63488 00:30:20.904 } 00:30:20.904 ] 00:30:20.904 }' 00:30:20.904 14:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:20.904 14:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.470 14:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:21.728 [2024-07-25 14:13:10.694810] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:21.728 [2024-07-25 14:13:10.695252] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:21.728 [2024-07-25 14:13:10.695390] bdev_raid.c:3816:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:21.728 [2024-07-25 14:13:10.695571] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:21.728 [2024-07-25 14:13:10.709969] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:30:21.728 [2024-07-25 14:13:10.712255] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:21.728 14:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:30:23.104 14:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.104 14:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.104 14:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:23.104 14:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:23.104 14:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:23.105 14:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.105 14:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.105 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.105 "name": "raid_bdev1", 00:30:23.105 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:23.105 "strip_size_kb": 0, 00:30:23.105 "state": "online", 00:30:23.105 "raid_level": "raid1", 00:30:23.105 "superblock": true, 00:30:23.105 "num_base_bdevs": 2, 00:30:23.105 "num_base_bdevs_discovered": 2, 00:30:23.105 "num_base_bdevs_operational": 2, 00:30:23.105 "process": { 00:30:23.105 "type": "rebuild", 00:30:23.105 "target": "spare", 00:30:23.105 "progress": { 00:30:23.105 "blocks": 24576, 00:30:23.105 "percent": 38 00:30:23.105 } 00:30:23.105 }, 00:30:23.105 "base_bdevs_list": [ 00:30:23.105 { 00:30:23.105 "name": "spare", 00:30:23.105 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:23.105 "is_configured": true, 00:30:23.105 "data_offset": 2048, 00:30:23.105 "data_size": 63488 00:30:23.105 }, 00:30:23.105 { 00:30:23.105 "name": "BaseBdev2", 00:30:23.105 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:23.105 "is_configured": true, 00:30:23.105 "data_offset": 2048, 00:30:23.105 "data_size": 63488 00:30:23.105 } 00:30:23.105 ] 00:30:23.105 }' 00:30:23.105 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.105 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.105 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.105 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.105 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:23.363 [2024-07-25 14:13:12.390177] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:23.621 [2024-07-25 14:13:12.422819] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:23.621 [2024-07-25 14:13:12.423277] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:23.621 
[2024-07-25 14:13:12.423464] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:23.621 [2024-07-25 14:13:12.423585] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.621 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.878 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:23.878 "name": "raid_bdev1", 00:30:23.878 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:23.878 "strip_size_kb": 0, 00:30:23.878 "state": "online", 00:30:23.878 "raid_level": "raid1", 00:30:23.878 "superblock": true, 00:30:23.878 "num_base_bdevs": 2, 00:30:23.878 "num_base_bdevs_discovered": 1, 00:30:23.878 "num_base_bdevs_operational": 1, 00:30:23.878 "base_bdevs_list": [ 00:30:23.878 { 00:30:23.878 "name": null, 00:30:23.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.878 "is_configured": false, 00:30:23.878 "data_offset": 2048, 00:30:23.878 "data_size": 63488 00:30:23.878 }, 00:30:23.878 { 00:30:23.878 "name": "BaseBdev2", 00:30:23.878 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:23.878 "is_configured": true, 00:30:23.878 "data_offset": 2048, 00:30:23.878 "data_size": 63488 00:30:23.878 } 00:30:23.878 ] 00:30:23.878 }' 00:30:23.878 14:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:23.878 14:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:24.444 14:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:24.701 [2024-07-25 14:13:13.663170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:24.701 [2024-07-25 14:13:13.663514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.701 [2024-07-25 14:13:13.663673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:30:24.701 [2024-07-25 14:13:13.663803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.701 [2024-07-25 14:13:13.664513] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.701 [2024-07-25 14:13:13.664682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:24.701 [2024-07-25 14:13:13.664915] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:24.701 [2024-07-25 14:13:13.665041] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:24.701 [2024-07-25 14:13:13.665158] bdev_raid.c:3816:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:24.701 [2024-07-25 14:13:13.665343] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:24.701 [2024-07-25 14:13:13.679528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:30:24.701 spare 00:30:24.701 [2024-07-25 14:13:13.681802] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:24.701 14:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:26.073 "name": "raid_bdev1", 00:30:26.073 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:26.073 "strip_size_kb": 0, 00:30:26.073 "state": "online", 00:30:26.073 "raid_level": "raid1", 00:30:26.073 "superblock": true, 00:30:26.073 "num_base_bdevs": 2, 00:30:26.073 "num_base_bdevs_discovered": 2, 00:30:26.073 "num_base_bdevs_operational": 2, 00:30:26.073 "process": { 00:30:26.073 "type": "rebuild", 00:30:26.073 "target": "spare", 00:30:26.073 "progress": { 00:30:26.073 "blocks": 24576, 00:30:26.073 "percent": 38 00:30:26.073 } 00:30:26.073 }, 00:30:26.073 "base_bdevs_list": [ 00:30:26.073 { 00:30:26.073 "name": "spare", 00:30:26.073 "uuid": "b86de9d8-6014-5662-a7a2-4e47ec5ab8ff", 00:30:26.073 "is_configured": true, 00:30:26.073 "data_offset": 2048, 00:30:26.073 "data_size": 63488 00:30:26.073 }, 00:30:26.073 { 00:30:26.073 "name": "BaseBdev2", 00:30:26.073 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:26.073 "is_configured": true, 00:30:26.073 "data_offset": 2048, 00:30:26.073 "data_size": 63488 00:30:26.073 } 00:30:26.073 ] 00:30:26.073 }' 00:30:26.073 14:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:26.073 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:26.073 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:26.073 
14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:26.073 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:26.331 [2024-07-25 14:13:15.315466] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:26.589 [2024-07-25 14:13:15.391903] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:26.589 [2024-07-25 14:13:15.392144] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.589 [2024-07-25 14:13:15.392280] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:26.589 [2024-07-25 14:13:15.392328] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.589 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.846 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.846 "name": "raid_bdev1", 00:30:26.846 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:26.846 "strip_size_kb": 0, 00:30:26.846 "state": "online", 00:30:26.846 "raid_level": "raid1", 00:30:26.846 "superblock": true, 00:30:26.846 "num_base_bdevs": 2, 00:30:26.846 "num_base_bdevs_discovered": 1, 00:30:26.846 "num_base_bdevs_operational": 1, 00:30:26.846 "base_bdevs_list": [ 00:30:26.846 { 00:30:26.846 "name": null, 00:30:26.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.846 "is_configured": false, 00:30:26.846 "data_offset": 2048, 00:30:26.846 "data_size": 63488 00:30:26.846 }, 00:30:26.846 { 00:30:26.846 "name": "BaseBdev2", 00:30:26.846 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:26.846 "is_configured": true, 00:30:26.846 "data_offset": 2048, 00:30:26.846 "data_size": 63488 00:30:26.846 } 00:30:26.846 ] 00:30:26.846 }' 00:30:26.846 14:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.846 14:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.411 14:13:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:27.411 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.411 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:27.411 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:27.411 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.411 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.411 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.668 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:27.668 "name": "raid_bdev1", 00:30:27.668 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:27.668 "strip_size_kb": 0, 00:30:27.668 "state": "online", 00:30:27.668 "raid_level": "raid1", 00:30:27.668 "superblock": true, 00:30:27.668 "num_base_bdevs": 2, 00:30:27.668 "num_base_bdevs_discovered": 1, 00:30:27.668 "num_base_bdevs_operational": 1, 00:30:27.668 "base_bdevs_list": [ 00:30:27.668 { 00:30:27.668 "name": null, 00:30:27.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.668 "is_configured": false, 00:30:27.668 "data_offset": 2048, 00:30:27.668 "data_size": 63488 00:30:27.668 }, 00:30:27.668 { 00:30:27.668 "name": "BaseBdev2", 00:30:27.668 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:27.668 "is_configured": true, 00:30:27.668 "data_offset": 2048, 00:30:27.668 "data_size": 63488 00:30:27.668 } 00:30:27.668 ] 00:30:27.668 }' 00:30:27.668 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:27.668 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:27.668 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:27.668 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:27.668 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:27.925 14:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:28.183 [2024-07-25 14:13:17.152869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:28.183 [2024-07-25 14:13:17.153162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.183 [2024-07-25 14:13:17.153260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:30:28.183 [2024-07-25 14:13:17.153478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.183 [2024-07-25 14:13:17.154045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.183 [2024-07-25 14:13:17.154208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:28.183 [2024-07-25 14:13:17.154499] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:28.183 [2024-07-25 14:13:17.154628] 
bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:28.183 [2024-07-25 14:13:17.154735] bdev_raid.c:3777:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:28.183 BaseBdev1 00:30:28.183 14:13:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.556 "name": "raid_bdev1", 00:30:29.556 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:29.556 "strip_size_kb": 0, 00:30:29.556 "state": "online", 00:30:29.556 "raid_level": "raid1", 00:30:29.556 "superblock": true, 00:30:29.556 "num_base_bdevs": 2, 00:30:29.556 "num_base_bdevs_discovered": 1, 00:30:29.556 "num_base_bdevs_operational": 1, 00:30:29.556 "base_bdevs_list": [ 00:30:29.556 { 00:30:29.556 "name": null, 00:30:29.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.556 "is_configured": false, 00:30:29.556 "data_offset": 2048, 00:30:29.556 "data_size": 63488 00:30:29.556 }, 00:30:29.556 { 00:30:29.556 "name": "BaseBdev2", 00:30:29.556 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:29.556 "is_configured": true, 00:30:29.556 "data_offset": 2048, 00:30:29.556 "data_size": 63488 00:30:29.556 } 00:30:29.556 ] 00:30:29.556 }' 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.556 14:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:30.122 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:30.122 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.122 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:30.122 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:30.122 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.122 14:13:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.122 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.379 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.379 "name": "raid_bdev1", 00:30:30.379 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:30.379 "strip_size_kb": 0, 00:30:30.379 "state": "online", 00:30:30.379 "raid_level": "raid1", 00:30:30.379 "superblock": true, 00:30:30.379 "num_base_bdevs": 2, 00:30:30.379 "num_base_bdevs_discovered": 1, 00:30:30.379 "num_base_bdevs_operational": 1, 00:30:30.379 "base_bdevs_list": [ 00:30:30.379 { 00:30:30.379 "name": null, 00:30:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.379 "is_configured": false, 00:30:30.379 "data_offset": 2048, 00:30:30.379 "data_size": 63488 00:30:30.379 }, 00:30:30.379 { 00:30:30.379 "name": "BaseBdev2", 00:30:30.379 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:30.379 "is_configured": true, 00:30:30.379 "data_offset": 2048, 00:30:30.379 "data_size": 63488 00:30:30.379 } 00:30:30.379 ] 00:30:30.379 }' 00:30:30.379 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:30.653 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:30.929 [2024-07-25 14:13:19.777549] 
bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:30.930 [2024-07-25 14:13:19.777964] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:30.930 [2024-07-25 14:13:19.778098] bdev_raid.c:3777:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:30.930 request: 00:30:30.930 { 00:30:30.930 "base_bdev": "BaseBdev1", 00:30:30.930 "raid_bdev": "raid_bdev1", 00:30:30.930 "skip_rebuild": false, 00:30:30.930 "method": "bdev_raid_add_base_bdev", 00:30:30.930 "req_id": 1 00:30:30.930 } 00:30:30.930 Got JSON-RPC error response 00:30:30.930 response: 00:30:30.930 { 00:30:30.930 "code": -22, 00:30:30.930 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:30.930 } 00:30:30.930 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:30:30.930 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:30.930 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:30.930 14:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:30.930 14:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.862 14:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.120 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:32.120 "name": "raid_bdev1", 00:30:32.120 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:32.120 "strip_size_kb": 0, 00:30:32.120 "state": "online", 00:30:32.120 "raid_level": "raid1", 00:30:32.120 "superblock": true, 00:30:32.120 "num_base_bdevs": 2, 00:30:32.120 "num_base_bdevs_discovered": 1, 00:30:32.120 "num_base_bdevs_operational": 1, 00:30:32.120 "base_bdevs_list": [ 00:30:32.120 { 00:30:32.120 "name": null, 00:30:32.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.120 "is_configured": false, 00:30:32.120 "data_offset": 2048, 00:30:32.120 "data_size": 63488 00:30:32.120 }, 00:30:32.120 { 00:30:32.120 "name": "BaseBdev2", 00:30:32.120 "uuid": 
"8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:32.120 "is_configured": true, 00:30:32.120 "data_offset": 2048, 00:30:32.120 "data_size": 63488 00:30:32.120 } 00:30:32.120 ] 00:30:32.120 }' 00:30:32.120 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:32.120 14:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:33.053 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:33.053 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:33.053 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:33.053 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:33.053 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:33.053 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.053 14:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.053 14:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:33.053 "name": "raid_bdev1", 00:30:33.053 "uuid": "bc808370-cc08-48a1-8aea-c02ea015b7f9", 00:30:33.053 "strip_size_kb": 0, 00:30:33.053 "state": "online", 00:30:33.053 "raid_level": "raid1", 00:30:33.053 "superblock": true, 00:30:33.053 "num_base_bdevs": 2, 00:30:33.053 "num_base_bdevs_discovered": 1, 00:30:33.053 "num_base_bdevs_operational": 1, 00:30:33.053 "base_bdevs_list": [ 00:30:33.053 { 00:30:33.053 "name": null, 00:30:33.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.053 "is_configured": false, 00:30:33.053 "data_offset": 2048, 00:30:33.053 "data_size": 63488 00:30:33.053 }, 00:30:33.053 { 00:30:33.053 "name": "BaseBdev2", 00:30:33.053 "uuid": "8e70768f-e69c-5ff1-8917-21dec0ce7dbc", 00:30:33.053 "is_configured": true, 00:30:33.053 "data_offset": 2048, 00:30:33.053 "data_size": 63488 00:30:33.053 } 00:30:33.053 ] 00:30:33.053 }' 00:30:33.053 14:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 144872 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 144872 ']' 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 144872 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 144872 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:33.312 
14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 144872' 00:30:33.312 killing process with pid 144872 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 144872 00:30:33.312 Received shutdown signal, test time was about 60.000000 seconds 00:30:33.312 00:30:33.312 Latency(us) 00:30:33.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.312 =================================================================================================================== 00:30:33.312 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:33.312 14:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 144872 00:30:33.312 [2024-07-25 14:13:22.180577] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:33.312 [2024-07-25 14:13:22.180808] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:33.312 [2024-07-25 14:13:22.180970] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:33.312 [2024-07-25 14:13:22.181072] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:30:33.570 [2024-07-25 14:13:22.443686] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:34.947 ************************************ 00:30:34.947 END TEST raid_rebuild_test_sb 00:30:34.947 ************************************ 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:30:34.947 00:30:34.947 real 0m40.407s 00:30:34.947 user 1m0.794s 00:30:34.947 sys 0m5.608s 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.947 14:13:23 bdev_raid -- bdev/bdev_raid.sh@1033 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:30:34.947 14:13:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:34.947 14:13:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:34.947 14:13:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:34.947 ************************************ 00:30:34.947 START TEST raid_rebuild_test_io 00:30:34.947 ************************************ 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:34.947 14:13:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=145849 00:30:34.947 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:34.948 14:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 145849 /var/tmp/spdk-raid.sock 00:30:34.948 14:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 145849 ']' 00:30:34.948 14:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:34.948 14:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:34.948 14:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:34.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:34.948 14:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:34.948 14:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:34.948 [2024-07-25 14:13:23.703390] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:30:34.948 [2024-07-25 14:13:23.704273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145849 ] 00:30:34.948 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:34.948 Zero copy mechanism will not be used. 
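The trace above shows how raid_rebuild_test_io brings up its I/O generator: bdevperf is started against a dedicated RPC socket and the helper waits for that socket before any rpc.py configuration calls are issued. The lines below are only a condensed sketch of that launch-and-wait pattern, not the actual bdev_raid.sh code; the binary path, socket path and flags are copied from the trace, the surrounding shell (backgrounding, PID capture) is a simplifying assumption, and waitforlisten is the helper from common/autotest_common.sh referenced in the trace.

  # Minimal sketch (assumptions noted above) of the bdevperf launch-and-wait
  # pattern visible in the trace. Assumes an SPDK checkout at
  # /home/vagrant/spdk_repo/spdk and that common/autotest_common.sh is sourced.
  rpc_sock=/var/tmp/spdk-raid.sock

  # Flags are taken verbatim from the trace: -t 60 runs for 60 seconds,
  # -w randrw -M 50 is a 50/50 random read/write mix, -o 3M and -q 2 set the
  # I/O size and queue depth, -r picks the RPC socket, and -L bdev_raid
  # enables the *DEBUG* messages from bdev_raid.c that fill this log.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Block until the process is up and the RPC socket accepts connections
  # (same call as "waitforlisten 145849 /var/tmp/spdk-raid.sock" in the trace);
  # only then are the base bdevs configured over rpc.py -s "$rpc_sock".
  waitforlisten "$raid_pid" "$rpc_sock"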
00:30:34.948 [2024-07-25 14:13:23.873323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.206 [2024-07-25 14:13:24.127401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.464 [2024-07-25 14:13:24.340989] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:35.722 14:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:35.722 14:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:30:35.722 14:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:35.722 14:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:35.980 BaseBdev1_malloc 00:30:35.980 14:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:36.544 [2024-07-25 14:13:25.282446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:36.544 [2024-07-25 14:13:25.282772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.544 [2024-07-25 14:13:25.282953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:36.544 [2024-07-25 14:13:25.283083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:36.544 [2024-07-25 14:13:25.285814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.544 [2024-07-25 14:13:25.285988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:36.544 BaseBdev1 00:30:36.544 14:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:36.544 14:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:36.544 BaseBdev2_malloc 00:30:36.802 14:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:37.060 [2024-07-25 14:13:25.877896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:37.060 [2024-07-25 14:13:25.878270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:37.060 [2024-07-25 14:13:25.878476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:37.060 [2024-07-25 14:13:25.878615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:37.060 [2024-07-25 14:13:25.881239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:37.060 [2024-07-25 14:13:25.881444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:37.060 BaseBdev2 00:30:37.060 14:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:37.318 spare_malloc 00:30:37.318 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:37.576 spare_delay 00:30:37.576 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:37.833 [2024-07-25 14:13:26.693546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:37.833 [2024-07-25 14:13:26.693848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:37.833 [2024-07-25 14:13:26.694022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:37.833 [2024-07-25 14:13:26.694154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:37.833 [2024-07-25 14:13:26.696875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:37.833 [2024-07-25 14:13:26.697062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:37.833 spare 00:30:37.833 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:38.091 [2024-07-25 14:13:26.937693] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:38.091 [2024-07-25 14:13:26.940053] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:38.091 [2024-07-25 14:13:26.940303] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:30:38.091 [2024-07-25 14:13:26.940432] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:38.091 [2024-07-25 14:13:26.940646] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:38.091 [2024-07-25 14:13:26.941202] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:30:38.091 [2024-07-25 14:13:26.941343] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:30:38.091 [2024-07-25 14:13:26.941697] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:30:38.091 14:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.350 14:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.350 "name": "raid_bdev1", 00:30:38.350 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:38.350 "strip_size_kb": 0, 00:30:38.350 "state": "online", 00:30:38.350 "raid_level": "raid1", 00:30:38.350 "superblock": false, 00:30:38.350 "num_base_bdevs": 2, 00:30:38.350 "num_base_bdevs_discovered": 2, 00:30:38.350 "num_base_bdevs_operational": 2, 00:30:38.350 "base_bdevs_list": [ 00:30:38.350 { 00:30:38.350 "name": "BaseBdev1", 00:30:38.350 "uuid": "96edf79d-3bdd-56fa-92d3-7db4cda35f6a", 00:30:38.350 "is_configured": true, 00:30:38.350 "data_offset": 0, 00:30:38.350 "data_size": 65536 00:30:38.350 }, 00:30:38.350 { 00:30:38.350 "name": "BaseBdev2", 00:30:38.350 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:38.350 "is_configured": true, 00:30:38.350 "data_offset": 0, 00:30:38.350 "data_size": 65536 00:30:38.350 } 00:30:38.350 ] 00:30:38.350 }' 00:30:38.350 14:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.350 14:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.916 14:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:38.916 14:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:39.174 [2024-07-25 14:13:28.126251] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:39.174 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:30:39.174 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.174 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:39.432 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:30:39.432 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:30:39.432 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:39.432 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:39.690 [2024-07-25 14:13:28.489901] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:39.690 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:39.690 Zero copy mechanism will not be used. 00:30:39.690 Running I/O for 60 seconds... 
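Every verify_raid_bdev_state / verify_raid_bdev_process step in this log follows the same pattern: dump all raid bdevs over the test's RPC socket, pick out raid_bdev1 with jq, then compare individual fields against the expected values. The sketch below illustrates that pattern; it is not the actual helper from bdev/bdev_raid.sh, just a condensed example reusing the rpc.py subcommand, jq filters and field names that appear verbatim in the trace (including the .process.type // "none" fallback).

  # Condensed sketch of the state-check pattern used throughout this log.
  # Assumes the bdevperf instance above is still listening on the raid socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Same RPC call and jq filter as in bdev_raid.sh@126/@187 above.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

  state=$(jq -r '.state' <<< "$info")                          # e.g. "online"
  level=$(jq -r '.raid_level' <<< "$info")                     # e.g. "raid1"
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info") # 2 before a base bdev is removed, 1 after

  # jq's // operator substitutes "none" when no rebuild process is running,
  # which is what the .process.type // "none" checks above rely on.
  proc_type=$(jq -r '.process.type // "none"' <<< "$info")
  proc_target=$(jq -r '.process.target // "none"' <<< "$info")

  [[ $state == online && $level == raid1 ]] || echo "unexpected raid_bdev1 state: $info"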
00:30:39.690 [2024-07-25 14:13:28.670600] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:39.690 [2024-07-25 14:13:28.678464] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.690 14:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.256 14:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.256 "name": "raid_bdev1", 00:30:40.256 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:40.256 "strip_size_kb": 0, 00:30:40.256 "state": "online", 00:30:40.256 "raid_level": "raid1", 00:30:40.256 "superblock": false, 00:30:40.256 "num_base_bdevs": 2, 00:30:40.256 "num_base_bdevs_discovered": 1, 00:30:40.256 "num_base_bdevs_operational": 1, 00:30:40.256 "base_bdevs_list": [ 00:30:40.256 { 00:30:40.256 "name": null, 00:30:40.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.256 "is_configured": false, 00:30:40.256 "data_offset": 0, 00:30:40.256 "data_size": 65536 00:30:40.256 }, 00:30:40.256 { 00:30:40.256 "name": "BaseBdev2", 00:30:40.256 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:40.256 "is_configured": true, 00:30:40.256 "data_offset": 0, 00:30:40.256 "data_size": 65536 00:30:40.256 } 00:30:40.256 ] 00:30:40.256 }' 00:30:40.256 14:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.256 14:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.822 14:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:41.079 [2024-07-25 14:13:29.949113] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:41.079 14:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:41.079 [2024-07-25 14:13:30.004080] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:41.079 [2024-07-25 14:13:30.006336] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:41.079 [2024-07-25 14:13:30.115834] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:41.079 [2024-07-25 14:13:30.116757] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:41.337 [2024-07-25 14:13:30.327722] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:41.337 [2024-07-25 14:13:30.328272] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:41.903 [2024-07-25 14:13:30.686730] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:41.903 [2024-07-25 14:13:30.912195] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:41.903 [2024-07-25 14:13:30.912699] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:42.160 14:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:42.160 14:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:42.161 14:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:42.161 14:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:42.161 14:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:42.161 14:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.161 14:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.418 [2024-07-25 14:13:31.271789] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:42.418 14:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.418 "name": "raid_bdev1", 00:30:42.418 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:42.418 "strip_size_kb": 0, 00:30:42.418 "state": "online", 00:30:42.418 "raid_level": "raid1", 00:30:42.418 "superblock": false, 00:30:42.418 "num_base_bdevs": 2, 00:30:42.418 "num_base_bdevs_discovered": 2, 00:30:42.418 "num_base_bdevs_operational": 2, 00:30:42.418 "process": { 00:30:42.418 "type": "rebuild", 00:30:42.418 "target": "spare", 00:30:42.418 "progress": { 00:30:42.418 "blocks": 12288, 00:30:42.418 "percent": 18 00:30:42.418 } 00:30:42.418 }, 00:30:42.418 "base_bdevs_list": [ 00:30:42.418 { 00:30:42.418 "name": "spare", 00:30:42.418 "uuid": "125b7fb8-34da-59bb-a12a-24fcce3d3e19", 00:30:42.418 "is_configured": true, 00:30:42.418 "data_offset": 0, 00:30:42.418 "data_size": 65536 00:30:42.418 }, 00:30:42.418 { 00:30:42.418 "name": "BaseBdev2", 00:30:42.418 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:42.418 "is_configured": true, 00:30:42.418 "data_offset": 0, 00:30:42.418 "data_size": 65536 00:30:42.418 } 00:30:42.418 ] 00:30:42.418 }' 00:30:42.418 14:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:42.418 14:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:42.418 14:13:31 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:42.418 [2024-07-25 14:13:31.396172] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:42.418 [2024-07-25 14:13:31.396784] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:42.418 14:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:42.418 14:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:42.675 [2024-07-25 14:13:31.678004] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:42.932 [2024-07-25 14:13:31.758834] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:42.932 [2024-07-25 14:13:31.876435] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:42.933 [2024-07-25 14:13:31.908263] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:42.933 [2024-07-25 14:13:31.909048] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:42.933 [2024-07-25 14:13:31.909122] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:42.933 [2024-07-25 14:13:31.955031] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.189 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.754 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:43.754 "name": "raid_bdev1", 00:30:43.754 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:43.754 "strip_size_kb": 0, 00:30:43.754 "state": "online", 00:30:43.754 "raid_level": "raid1", 00:30:43.754 "superblock": false, 00:30:43.754 "num_base_bdevs": 2, 00:30:43.754 "num_base_bdevs_discovered": 1, 00:30:43.754 "num_base_bdevs_operational": 1, 00:30:43.754 "base_bdevs_list": [ 00:30:43.754 { 00:30:43.754 "name": 
null, 00:30:43.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.754 "is_configured": false, 00:30:43.754 "data_offset": 0, 00:30:43.754 "data_size": 65536 00:30:43.754 }, 00:30:43.754 { 00:30:43.754 "name": "BaseBdev2", 00:30:43.754 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:43.754 "is_configured": true, 00:30:43.754 "data_offset": 0, 00:30:43.754 "data_size": 65536 00:30:43.754 } 00:30:43.754 ] 00:30:43.754 }' 00:30:43.754 14:13:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:43.754 14:13:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.318 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:44.318 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:44.318 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:44.318 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:44.318 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:44.318 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.318 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.574 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:44.574 "name": "raid_bdev1", 00:30:44.574 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:44.574 "strip_size_kb": 0, 00:30:44.574 "state": "online", 00:30:44.574 "raid_level": "raid1", 00:30:44.574 "superblock": false, 00:30:44.574 "num_base_bdevs": 2, 00:30:44.574 "num_base_bdevs_discovered": 1, 00:30:44.574 "num_base_bdevs_operational": 1, 00:30:44.574 "base_bdevs_list": [ 00:30:44.574 { 00:30:44.574 "name": null, 00:30:44.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.574 "is_configured": false, 00:30:44.574 "data_offset": 0, 00:30:44.574 "data_size": 65536 00:30:44.574 }, 00:30:44.574 { 00:30:44.574 "name": "BaseBdev2", 00:30:44.574 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:44.574 "is_configured": true, 00:30:44.574 "data_offset": 0, 00:30:44.574 "data_size": 65536 00:30:44.574 } 00:30:44.574 ] 00:30:44.574 }' 00:30:44.574 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:44.574 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:44.574 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:44.574 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:44.574 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:44.832 [2024-07-25 14:13:33.867333] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:45.089 14:13:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:45.089 [2024-07-25 14:13:33.948740] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:45.089 [2024-07-25 14:13:33.951046] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on 
raid bdev raid_bdev1 00:30:45.089 [2024-07-25 14:13:34.060993] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:45.089 [2024-07-25 14:13:34.061790] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:45.347 [2024-07-25 14:13:34.280796] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:45.347 [2024-07-25 14:13:34.281370] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:45.604 [2024-07-25 14:13:34.638063] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:46.170 14:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:46.170 14:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:46.170 14:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:46.170 14:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:46.170 14:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:46.170 14:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.170 14:13:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.170 [2024-07-25 14:13:35.000316] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:46.170 [2024-07-25 14:13:35.001021] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.427 "name": "raid_bdev1", 00:30:46.427 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:46.427 "strip_size_kb": 0, 00:30:46.427 "state": "online", 00:30:46.427 "raid_level": "raid1", 00:30:46.427 "superblock": false, 00:30:46.427 "num_base_bdevs": 2, 00:30:46.427 "num_base_bdevs_discovered": 2, 00:30:46.427 "num_base_bdevs_operational": 2, 00:30:46.427 "process": { 00:30:46.427 "type": "rebuild", 00:30:46.427 "target": "spare", 00:30:46.427 "progress": { 00:30:46.427 "blocks": 18432, 00:30:46.427 "percent": 28 00:30:46.427 } 00:30:46.427 }, 00:30:46.427 "base_bdevs_list": [ 00:30:46.427 { 00:30:46.427 "name": "spare", 00:30:46.427 "uuid": "125b7fb8-34da-59bb-a12a-24fcce3d3e19", 00:30:46.427 "is_configured": true, 00:30:46.427 "data_offset": 0, 00:30:46.427 "data_size": 65536 00:30:46.427 }, 00:30:46.427 { 00:30:46.427 "name": "BaseBdev2", 00:30:46.427 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:46.427 "is_configured": true, 00:30:46.427 "data_offset": 0, 00:30:46.427 "data_size": 65536 00:30:46.427 } 00:30:46.427 ] 00:30:46.427 }' 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.427 14:13:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=981 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.427 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.684 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.684 "name": "raid_bdev1", 00:30:46.684 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:46.684 "strip_size_kb": 0, 00:30:46.684 "state": "online", 00:30:46.684 "raid_level": "raid1", 00:30:46.684 "superblock": false, 00:30:46.684 "num_base_bdevs": 2, 00:30:46.684 "num_base_bdevs_discovered": 2, 00:30:46.684 "num_base_bdevs_operational": 2, 00:30:46.684 "process": { 00:30:46.684 "type": "rebuild", 00:30:46.684 "target": "spare", 00:30:46.684 "progress": { 00:30:46.684 "blocks": 26624, 00:30:46.684 "percent": 40 00:30:46.684 } 00:30:46.684 }, 00:30:46.684 "base_bdevs_list": [ 00:30:46.684 { 00:30:46.684 "name": "spare", 00:30:46.684 "uuid": "125b7fb8-34da-59bb-a12a-24fcce3d3e19", 00:30:46.685 "is_configured": true, 00:30:46.685 "data_offset": 0, 00:30:46.685 "data_size": 65536 00:30:46.685 }, 00:30:46.685 { 00:30:46.685 "name": "BaseBdev2", 00:30:46.685 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:46.685 "is_configured": true, 00:30:46.685 "data_offset": 0, 00:30:46.685 "data_size": 65536 00:30:46.685 } 00:30:46.685 ] 00:30:46.685 }' 00:30:46.685 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:46.685 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:46.685 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.941 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:46.941 14:13:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:46.941 [2024-07-25 14:13:35.871785] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:47.198 [2024-07-25 14:13:36.220162] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:47.456 [2024-07-25 14:13:36.430892] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:47.456 [2024-07-25 14:13:36.431420] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.713 14:13:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.978 [2024-07-25 14:13:36.878746] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:48.248 14:13:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.248 "name": "raid_bdev1", 00:30:48.248 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:48.248 "strip_size_kb": 0, 00:30:48.248 "state": "online", 00:30:48.248 "raid_level": "raid1", 00:30:48.248 "superblock": false, 00:30:48.248 "num_base_bdevs": 2, 00:30:48.248 "num_base_bdevs_discovered": 2, 00:30:48.248 "num_base_bdevs_operational": 2, 00:30:48.248 "process": { 00:30:48.248 "type": "rebuild", 00:30:48.248 "target": "spare", 00:30:48.248 "progress": { 00:30:48.248 "blocks": 47104, 00:30:48.248 "percent": 71 00:30:48.248 } 00:30:48.248 }, 00:30:48.248 "base_bdevs_list": [ 00:30:48.248 { 00:30:48.248 "name": "spare", 00:30:48.248 "uuid": "125b7fb8-34da-59bb-a12a-24fcce3d3e19", 00:30:48.248 "is_configured": true, 00:30:48.248 "data_offset": 0, 00:30:48.248 "data_size": 65536 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "name": "BaseBdev2", 00:30:48.248 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:48.248 "is_configured": true, 00:30:48.248 "data_offset": 0, 00:30:48.248 "data_size": 65536 00:30:48.248 } 00:30:48.248 ] 00:30:48.248 }' 00:30:48.249 14:13:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:48.249 14:13:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:48.249 14:13:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:48.249 14:13:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:48.249 14:13:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:48.249 [2024-07-25 14:13:37.210217] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:30:49.181 [2024-07-25 14:13:38.004758] bdev_raid.c:2894:raid_bdev_process_thread_run: 
*DEBUG*: process completed on raid_bdev1 00:30:49.181 [2024-07-25 14:13:38.112181] bdev_raid.c:2556:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:49.181 [2024-07-25 14:13:38.114980] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.181 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.439 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:49.439 "name": "raid_bdev1", 00:30:49.439 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:49.439 "strip_size_kb": 0, 00:30:49.439 "state": "online", 00:30:49.439 "raid_level": "raid1", 00:30:49.439 "superblock": false, 00:30:49.439 "num_base_bdevs": 2, 00:30:49.439 "num_base_bdevs_discovered": 2, 00:30:49.439 "num_base_bdevs_operational": 2, 00:30:49.439 "base_bdevs_list": [ 00:30:49.439 { 00:30:49.439 "name": "spare", 00:30:49.439 "uuid": "125b7fb8-34da-59bb-a12a-24fcce3d3e19", 00:30:49.439 "is_configured": true, 00:30:49.439 "data_offset": 0, 00:30:49.439 "data_size": 65536 00:30:49.439 }, 00:30:49.439 { 00:30:49.439 "name": "BaseBdev2", 00:30:49.439 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:49.439 "is_configured": true, 00:30:49.439 "data_offset": 0, 00:30:49.439 "data_size": 65536 00:30:49.439 } 00:30:49.439 ] 00:30:49.439 }' 00:30:49.440 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.697 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:49.955 "name": "raid_bdev1", 00:30:49.955 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:49.955 "strip_size_kb": 0, 00:30:49.955 "state": "online", 00:30:49.955 "raid_level": "raid1", 00:30:49.955 "superblock": false, 00:30:49.955 "num_base_bdevs": 2, 00:30:49.955 "num_base_bdevs_discovered": 2, 00:30:49.955 "num_base_bdevs_operational": 2, 00:30:49.955 "base_bdevs_list": [ 00:30:49.955 { 00:30:49.955 "name": "spare", 00:30:49.955 "uuid": "125b7fb8-34da-59bb-a12a-24fcce3d3e19", 00:30:49.955 "is_configured": true, 00:30:49.955 "data_offset": 0, 00:30:49.955 "data_size": 65536 00:30:49.955 }, 00:30:49.955 { 00:30:49.955 "name": "BaseBdev2", 00:30:49.955 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:49.955 "is_configured": true, 00:30:49.955 "data_offset": 0, 00:30:49.955 "data_size": 65536 00:30:49.955 } 00:30:49.955 ] 00:30:49.955 }' 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.955 14:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.212 14:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:50.212 "name": "raid_bdev1", 00:30:50.212 "uuid": "d1ee862c-7417-4e2b-9360-7ad792deaf5e", 00:30:50.212 "strip_size_kb": 0, 00:30:50.212 "state": "online", 00:30:50.212 "raid_level": "raid1", 00:30:50.212 "superblock": false, 00:30:50.212 "num_base_bdevs": 2, 00:30:50.212 "num_base_bdevs_discovered": 2, 00:30:50.212 "num_base_bdevs_operational": 2, 00:30:50.212 "base_bdevs_list": [ 00:30:50.212 { 00:30:50.212 "name": "spare", 00:30:50.212 "uuid": "125b7fb8-34da-59bb-a12a-24fcce3d3e19", 00:30:50.212 "is_configured": true, 
00:30:50.212 "data_offset": 0, 00:30:50.212 "data_size": 65536 00:30:50.212 }, 00:30:50.212 { 00:30:50.212 "name": "BaseBdev2", 00:30:50.212 "uuid": "149377a9-bd41-5c38-8e41-1cb0effdac01", 00:30:50.212 "is_configured": true, 00:30:50.212 "data_offset": 0, 00:30:50.212 "data_size": 65536 00:30:50.212 } 00:30:50.212 ] 00:30:50.212 }' 00:30:50.212 14:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:50.212 14:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:51.147 14:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:51.147 [2024-07-25 14:13:40.131470] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:51.147 [2024-07-25 14:13:40.131751] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:51.404 00:30:51.404 Latency(us) 00:30:51.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.404 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:51.404 raid_bdev1 : 11.71 101.39 304.18 0.00 0.00 13519.12 290.44 117249.86 00:30:51.404 =================================================================================================================== 00:30:51.404 Total : 101.39 304.18 0.00 0.00 13519.12 290.44 117249.86 00:30:51.404 [2024-07-25 14:13:40.218809] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:51.404 [2024-07-25 14:13:40.219000] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:51.405 0 00:30:51.405 [2024-07-25 14:13:40.219199] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:51.405 [2024-07-25 14:13:40.219219] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:30:51.405 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:30:51.405 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:30:51.663 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:51.921 /dev/nbd0 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:51.921 1+0 records in 00:30:51.921 1+0 records out 00:30:51.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495738 s, 8.3 MB/s 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
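With the rebuilt spare now exposed as /dev/nbd0, the entries that follow export BaseBdev2 as /dev/nbd1 and run cmp across the two block devices, which is how the test confirms the rebuild produced an identical mirror copy. A minimal sketch of that verification step, assuming the same RPC socket, bdev names and nbd nodes as in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" nbd_start_disk spare     /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk BaseBdev2 /dev/nbd1
    # cmp -i 0 skips zero initial bytes, i.e. compares from offset 0;
    # a non-zero exit on the first differing byte fails the test.
    cmp -i 0 /dev/nbd0 /dev/nbd1
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0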
00:30:51.921 14:13:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:30:52.179 /dev/nbd1 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:52.179 1+0 records in 00:30:52.179 1+0 records out 00:30:52.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449733 s, 9.1 MB/s 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:52.179 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:52.438 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:52.438 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:52.438 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:52.438 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:52.438 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:52.438 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:52.438 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:52.696 14:13:41 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:52.696 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 145849 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 145849 ']' 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 145849 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145849 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145849' 00:30:52.969 killing process with pid 145849 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 145849 00:30:52.969 Received shutdown signal, test time was about 13.450839 seconds 
00:30:52.969 00:30:52.969 Latency(us) 00:30:52.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.969 =================================================================================================================== 00:30:52.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.969 14:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 145849 00:30:52.969 [2024-07-25 14:13:41.943565] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:53.227 [2024-07-25 14:13:42.127074] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:54.600 ************************************ 00:30:54.600 END TEST raid_rebuild_test_io 00:30:54.600 ************************************ 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:30:54.600 00:30:54.600 real 0m19.682s 00:30:54.600 user 0m30.923s 00:30:54.600 sys 0m2.254s 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:54.600 14:13:43 bdev_raid -- bdev/bdev_raid.sh@1034 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:30:54.600 14:13:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:54.600 14:13:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:54.600 14:13:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:54.600 ************************************ 00:30:54.600 START TEST raid_rebuild_test_sb_io 00:30:54.600 ************************************ 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:30:54.600 
14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=146334 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 146334 /var/tmp/spdk-raid.sock 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 146334 ']' 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:54.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:54.600 14:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:54.600 [2024-07-25 14:13:43.447530] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:30:54.600 [2024-07-25 14:13:43.447914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146334 ] 00:30:54.600 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:54.600 Zero copy mechanism will not be used. 
00:30:54.600 [2024-07-25 14:13:43.604194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.857 [2024-07-25 14:13:43.817839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.114 [2024-07-25 14:13:44.013985] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:55.679 14:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.679 14:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:30:55.679 14:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:55.679 14:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:55.938 BaseBdev1_malloc 00:30:55.938 14:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:56.195 [2024-07-25 14:13:45.031582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:56.195 [2024-07-25 14:13:45.031954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.195 [2024-07-25 14:13:45.032157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:56.195 [2024-07-25 14:13:45.032319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.195 [2024-07-25 14:13:45.035061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.195 [2024-07-25 14:13:45.035244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:56.195 BaseBdev1 00:30:56.195 14:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:56.195 14:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:56.453 BaseBdev2_malloc 00:30:56.453 14:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:56.711 [2024-07-25 14:13:45.575981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:56.711 [2024-07-25 14:13:45.576293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.711 [2024-07-25 14:13:45.576382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:56.711 [2024-07-25 14:13:45.576613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.711 [2024-07-25 14:13:45.579282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.711 [2024-07-25 14:13:45.579470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:56.711 BaseBdev2 00:30:56.711 14:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:56.969 spare_malloc 00:30:56.969 14:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:57.227 spare_delay 00:30:57.227 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:57.485 [2024-07-25 14:13:46.496052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:57.485 [2024-07-25 14:13:46.496507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.485 [2024-07-25 14:13:46.496673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:57.485 [2024-07-25 14:13:46.496812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.485 [2024-07-25 14:13:46.499570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.485 [2024-07-25 14:13:46.499763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:57.485 spare 00:30:57.485 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:30:58.081 [2024-07-25 14:13:46.788233] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:58.081 [2024-07-25 14:13:46.790732] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:58.081 [2024-07-25 14:13:46.791085] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:30:58.081 [2024-07-25 14:13:46.791223] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:58.081 [2024-07-25 14:13:46.791407] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:58.081 [2024-07-25 14:13:46.791927] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:30:58.081 [2024-07-25 14:13:46.792054] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:30:58.081 [2024-07-25 14:13:46.792401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:30:58.081 14:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.081 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.081 "name": "raid_bdev1", 00:30:58.081 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:30:58.081 "strip_size_kb": 0, 00:30:58.081 "state": "online", 00:30:58.081 "raid_level": "raid1", 00:30:58.081 "superblock": true, 00:30:58.081 "num_base_bdevs": 2, 00:30:58.081 "num_base_bdevs_discovered": 2, 00:30:58.081 "num_base_bdevs_operational": 2, 00:30:58.081 "base_bdevs_list": [ 00:30:58.081 { 00:30:58.081 "name": "BaseBdev1", 00:30:58.081 "uuid": "ed081007-1ff6-580e-92e1-189b82cfd8e7", 00:30:58.081 "is_configured": true, 00:30:58.081 "data_offset": 2048, 00:30:58.081 "data_size": 63488 00:30:58.081 }, 00:30:58.081 { 00:30:58.081 "name": "BaseBdev2", 00:30:58.081 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:30:58.081 "is_configured": true, 00:30:58.082 "data_offset": 2048, 00:30:58.082 "data_size": 63488 00:30:58.082 } 00:30:58.082 ] 00:30:58.082 }' 00:30:58.082 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.082 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.014 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:59.014 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:59.014 [2024-07-25 14:13:47.920952] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:59.014 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:30:59.014 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:59.014 14:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.273 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:30:59.273 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:30:59.273 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:59.273 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:59.531 [2024-07-25 14:13:48.377045] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:59.531 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:59.531 Zero copy mechanism will not be used. 00:30:59.531 Running I/O for 60 seconds... 
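Here the harness removes BaseBdev1 from the array while bdevperf keeps the random read/write load running, and the entries that follow confirm raid_bdev1 stays online in degraded mode with a single discovered base bdev. A small sketch of that degraded-state check, reusing the RPC socket, names and jq filter from this log (the expected values come straight from the JSON dumped below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # raid1 should survive losing one of two members: still online, one bdev discovered.
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]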
00:30:59.531 [2024-07-25 14:13:48.491934] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:59.531 [2024-07-25 14:13:48.499334] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:30:59.531 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.532 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.097 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:00.097 "name": "raid_bdev1", 00:31:00.097 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:00.097 "strip_size_kb": 0, 00:31:00.097 "state": "online", 00:31:00.097 "raid_level": "raid1", 00:31:00.097 "superblock": true, 00:31:00.097 "num_base_bdevs": 2, 00:31:00.097 "num_base_bdevs_discovered": 1, 00:31:00.098 "num_base_bdevs_operational": 1, 00:31:00.098 "base_bdevs_list": [ 00:31:00.098 { 00:31:00.098 "name": null, 00:31:00.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.098 "is_configured": false, 00:31:00.098 "data_offset": 2048, 00:31:00.098 "data_size": 63488 00:31:00.098 }, 00:31:00.098 { 00:31:00.098 "name": "BaseBdev2", 00:31:00.098 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:00.098 "is_configured": true, 00:31:00.098 "data_offset": 2048, 00:31:00.098 "data_size": 63488 00:31:00.098 } 00:31:00.098 ] 00:31:00.098 }' 00:31:00.098 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:00.098 14:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:00.662 14:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:00.920 [2024-07-25 14:13:49.817101] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:00.920 14:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:00.920 [2024-07-25 14:13:49.895431] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:00.920 [2024-07-25 14:13:49.897802] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:01.177 
[2024-07-25 14:13:50.016648] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:01.177 [2024-07-25 14:13:50.017392] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:01.435 [2024-07-25 14:13:50.236747] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:01.435 [2024-07-25 14:13:50.237310] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:01.694 [2024-07-25 14:13:50.588443] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:01.694 [2024-07-25 14:13:50.715134] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:01.952 14:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:01.952 14:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:01.952 14:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:01.952 14:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:01.952 14:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:01.952 14:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.952 14:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.209 [2024-07-25 14:13:51.028393] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:02.209 [2024-07-25 14:13:51.029151] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:02.209 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:02.209 "name": "raid_bdev1", 00:31:02.209 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:02.209 "strip_size_kb": 0, 00:31:02.209 "state": "online", 00:31:02.209 "raid_level": "raid1", 00:31:02.209 "superblock": true, 00:31:02.209 "num_base_bdevs": 2, 00:31:02.209 "num_base_bdevs_discovered": 2, 00:31:02.209 "num_base_bdevs_operational": 2, 00:31:02.209 "process": { 00:31:02.209 "type": "rebuild", 00:31:02.209 "target": "spare", 00:31:02.209 "progress": { 00:31:02.209 "blocks": 14336, 00:31:02.209 "percent": 22 00:31:02.209 } 00:31:02.209 }, 00:31:02.209 "base_bdevs_list": [ 00:31:02.209 { 00:31:02.209 "name": "spare", 00:31:02.209 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:02.209 "is_configured": true, 00:31:02.209 "data_offset": 2048, 00:31:02.209 "data_size": 63488 00:31:02.209 }, 00:31:02.209 { 00:31:02.209 "name": "BaseBdev2", 00:31:02.209 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:02.209 "is_configured": true, 00:31:02.209 "data_offset": 2048, 00:31:02.209 "data_size": 63488 00:31:02.209 } 00:31:02.209 ] 00:31:02.209 }' 00:31:02.209 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:02.210 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:31:02.210 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:02.476 [2024-07-25 14:13:51.275081] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:02.476 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:02.477 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:02.753 [2024-07-25 14:13:51.570673] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:02.753 [2024-07-25 14:13:51.632725] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:02.753 [2024-07-25 14:13:51.649900] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:02.753 [2024-07-25 14:13:51.652259] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:02.753 [2024-07-25 14:13:51.652415] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:02.753 [2024-07-25 14:13:51.652463] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:02.753 [2024-07-25 14:13:51.677781] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:02.753 14:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.011 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:03.011 "name": "raid_bdev1", 00:31:03.011 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:03.011 "strip_size_kb": 0, 00:31:03.011 "state": "online", 00:31:03.011 "raid_level": "raid1", 00:31:03.011 "superblock": true, 00:31:03.011 "num_base_bdevs": 2, 00:31:03.011 "num_base_bdevs_discovered": 1, 00:31:03.011 "num_base_bdevs_operational": 1, 00:31:03.011 "base_bdevs_list": [ 00:31:03.011 { 00:31:03.011 "name": null, 00:31:03.011 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:03.011 "is_configured": false, 00:31:03.011 "data_offset": 2048, 00:31:03.011 "data_size": 63488 00:31:03.011 }, 00:31:03.011 { 00:31:03.011 "name": "BaseBdev2", 00:31:03.011 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:03.011 "is_configured": true, 00:31:03.011 "data_offset": 2048, 00:31:03.011 "data_size": 63488 00:31:03.011 } 00:31:03.011 ] 00:31:03.011 }' 00:31:03.011 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:03.011 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.943 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:03.943 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:03.943 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:03.943 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:03.943 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:03.943 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.943 14:13:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.201 14:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:04.201 "name": "raid_bdev1", 00:31:04.201 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:04.201 "strip_size_kb": 0, 00:31:04.201 "state": "online", 00:31:04.201 "raid_level": "raid1", 00:31:04.201 "superblock": true, 00:31:04.201 "num_base_bdevs": 2, 00:31:04.201 "num_base_bdevs_discovered": 1, 00:31:04.201 "num_base_bdevs_operational": 1, 00:31:04.201 "base_bdevs_list": [ 00:31:04.201 { 00:31:04.201 "name": null, 00:31:04.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.201 "is_configured": false, 00:31:04.201 "data_offset": 2048, 00:31:04.201 "data_size": 63488 00:31:04.201 }, 00:31:04.201 { 00:31:04.201 "name": "BaseBdev2", 00:31:04.201 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:04.201 "is_configured": true, 00:31:04.201 "data_offset": 2048, 00:31:04.201 "data_size": 63488 00:31:04.201 } 00:31:04.201 ] 00:31:04.201 }' 00:31:04.201 14:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:04.201 14:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:04.201 14:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.202 14:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:04.202 14:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:04.767 [2024-07-25 14:13:53.569052] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:04.767 14:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:31:04.767 [2024-07-25 14:13:53.651758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:04.767 [2024-07-25 14:13:53.654058] 
bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:04.767 [2024-07-25 14:13:53.764590] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:04.767 [2024-07-25 14:13:53.765343] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:05.024 [2024-07-25 14:13:53.993290] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:05.024 [2024-07-25 14:13:53.993893] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:05.591 [2024-07-25 14:13:54.341070] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:05.591 [2024-07-25 14:13:54.341876] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:05.591 [2024-07-25 14:13:54.551677] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:05.591 [2024-07-25 14:13:54.552213] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:05.849 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:05.849 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:05.849 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:05.849 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:05.849 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:05.849 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.849 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.849 [2024-07-25 14:13:54.890293] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:06.107 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:06.107 "name": "raid_bdev1", 00:31:06.107 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:06.107 "strip_size_kb": 0, 00:31:06.107 "state": "online", 00:31:06.107 "raid_level": "raid1", 00:31:06.107 "superblock": true, 00:31:06.107 "num_base_bdevs": 2, 00:31:06.107 "num_base_bdevs_discovered": 2, 00:31:06.107 "num_base_bdevs_operational": 2, 00:31:06.107 "process": { 00:31:06.107 "type": "rebuild", 00:31:06.107 "target": "spare", 00:31:06.107 "progress": { 00:31:06.107 "blocks": 14336, 00:31:06.107 "percent": 22 00:31:06.107 } 00:31:06.107 }, 00:31:06.107 "base_bdevs_list": [ 00:31:06.107 { 00:31:06.107 "name": "spare", 00:31:06.107 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:06.107 "is_configured": true, 00:31:06.107 "data_offset": 2048, 00:31:06.107 "data_size": 63488 00:31:06.107 }, 00:31:06.107 { 00:31:06.107 "name": "BaseBdev2", 00:31:06.107 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:06.107 "is_configured": true, 00:31:06.107 "data_offset": 2048, 00:31:06.107 
"data_size": 63488 00:31:06.107 } 00:31:06.107 ] 00:31:06.107 }' 00:31:06.107 14:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:06.107 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:06.107 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:31:06.108 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=1001 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.108 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.108 [2024-07-25 14:13:55.127438] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:06.365 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:06.365 "name": "raid_bdev1", 00:31:06.365 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:06.365 "strip_size_kb": 0, 00:31:06.365 "state": "online", 00:31:06.365 "raid_level": "raid1", 00:31:06.365 "superblock": true, 00:31:06.365 "num_base_bdevs": 2, 00:31:06.365 "num_base_bdevs_discovered": 2, 00:31:06.365 "num_base_bdevs_operational": 2, 00:31:06.365 "process": { 00:31:06.365 "type": "rebuild", 00:31:06.365 "target": "spare", 00:31:06.365 "progress": { 00:31:06.365 "blocks": 16384, 00:31:06.365 "percent": 25 00:31:06.365 } 00:31:06.365 }, 00:31:06.365 "base_bdevs_list": [ 00:31:06.365 { 00:31:06.366 "name": "spare", 00:31:06.366 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:06.366 "is_configured": true, 00:31:06.366 "data_offset": 2048, 00:31:06.366 "data_size": 63488 00:31:06.366 }, 00:31:06.366 { 00:31:06.366 "name": "BaseBdev2", 00:31:06.366 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:06.366 "is_configured": true, 00:31:06.366 "data_offset": 2048, 
00:31:06.366 "data_size": 63488 00:31:06.366 } 00:31:06.366 ] 00:31:06.366 }' 00:31:06.366 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:06.624 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:06.624 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:06.624 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:06.624 14:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:06.882 [2024-07-25 14:13:55.877928] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:31:07.140 [2024-07-25 14:13:56.097833] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:07.140 [2024-07-25 14:13:56.098366] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:07.397 [2024-07-25 14:13:56.350664] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:07.397 [2024-07-25 14:13:56.351516] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.655 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.913 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:07.913 "name": "raid_bdev1", 00:31:07.913 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:07.913 "strip_size_kb": 0, 00:31:07.913 "state": "online", 00:31:07.913 "raid_level": "raid1", 00:31:07.913 "superblock": true, 00:31:07.913 "num_base_bdevs": 2, 00:31:07.913 "num_base_bdevs_discovered": 2, 00:31:07.913 "num_base_bdevs_operational": 2, 00:31:07.913 "process": { 00:31:07.913 "type": "rebuild", 00:31:07.913 "target": "spare", 00:31:07.913 "progress": { 00:31:07.913 "blocks": 36864, 00:31:07.913 "percent": 58 00:31:07.913 } 00:31:07.913 }, 00:31:07.913 "base_bdevs_list": [ 00:31:07.913 { 00:31:07.913 "name": "spare", 00:31:07.913 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:07.913 "is_configured": true, 00:31:07.913 "data_offset": 2048, 00:31:07.913 "data_size": 63488 00:31:07.913 }, 00:31:07.913 { 00:31:07.913 "name": "BaseBdev2", 00:31:07.913 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:07.913 "is_configured": true, 00:31:07.913 
"data_offset": 2048, 00:31:07.913 "data_size": 63488 00:31:07.913 } 00:31:07.913 ] 00:31:07.913 }' 00:31:07.913 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:07.914 [2024-07-25 14:13:56.803033] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:31:07.914 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:07.914 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:07.914 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:07.914 14:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:08.172 [2024-07-25 14:13:57.013597] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:31:08.172 [2024-07-25 14:13:57.014133] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:31:08.430 [2024-07-25 14:13:57.380725] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:31:08.688 [2024-07-25 14:13:57.721928] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.946 14:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.204 [2024-07-25 14:13:58.169366] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:31:09.204 14:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:09.204 "name": "raid_bdev1", 00:31:09.204 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:09.204 "strip_size_kb": 0, 00:31:09.204 "state": "online", 00:31:09.204 "raid_level": "raid1", 00:31:09.204 "superblock": true, 00:31:09.204 "num_base_bdevs": 2, 00:31:09.204 "num_base_bdevs_discovered": 2, 00:31:09.204 "num_base_bdevs_operational": 2, 00:31:09.204 "process": { 00:31:09.204 "type": "rebuild", 00:31:09.204 "target": "spare", 00:31:09.204 "progress": { 00:31:09.204 "blocks": 57344, 00:31:09.204 "percent": 90 00:31:09.204 } 00:31:09.204 }, 00:31:09.204 "base_bdevs_list": [ 00:31:09.204 { 00:31:09.204 "name": "spare", 00:31:09.204 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:09.204 "is_configured": true, 00:31:09.204 "data_offset": 2048, 00:31:09.204 "data_size": 63488 
00:31:09.204 }, 00:31:09.204 { 00:31:09.204 "name": "BaseBdev2", 00:31:09.204 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:09.204 "is_configured": true, 00:31:09.204 "data_offset": 2048, 00:31:09.204 "data_size": 63488 00:31:09.204 } 00:31:09.204 ] 00:31:09.204 }' 00:31:09.204 14:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:09.204 14:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:09.204 14:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:09.462 14:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:09.462 14:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:09.719 [2024-07-25 14:13:58.514692] bdev_raid.c:2894:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:09.719 [2024-07-25 14:13:58.622521] bdev_raid.c:2556:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:09.719 [2024-07-25 14:13:58.624976] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.316 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.594 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:10.594 "name": "raid_bdev1", 00:31:10.594 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:10.594 "strip_size_kb": 0, 00:31:10.594 "state": "online", 00:31:10.594 "raid_level": "raid1", 00:31:10.594 "superblock": true, 00:31:10.594 "num_base_bdevs": 2, 00:31:10.594 "num_base_bdevs_discovered": 2, 00:31:10.594 "num_base_bdevs_operational": 2, 00:31:10.594 "base_bdevs_list": [ 00:31:10.594 { 00:31:10.594 "name": "spare", 00:31:10.594 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:10.594 "is_configured": true, 00:31:10.594 "data_offset": 2048, 00:31:10.594 "data_size": 63488 00:31:10.594 }, 00:31:10.594 { 00:31:10.594 "name": "BaseBdev2", 00:31:10.594 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:10.594 "is_configured": true, 00:31:10.594 "data_offset": 2048, 00:31:10.594 "data_size": 63488 00:31:10.594 } 00:31:10.594 ] 00:31:10.594 }' 00:31:10.594 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
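The polling pattern repeated throughout this trace boils down to one RPC plus two jq filters: fetch the raid_bdev1 entry and read its background-process fields, which fall back to "none" once the rebuild finishes and the "process" object disappears from the bdev_raid_get_bdevs output (that is what drives the break at bdev_raid.sh line 724 just below). A minimal stand-alone sketch, assuming the same rpc.py path and RPC socket shown in the trace and hypothetical variable names:
    # Fetch the raid_bdev1 entry over the RAID RPC socket used in this test.
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # "rebuild"/"spare" while a rebuild is running; "none" once the process object is gone.
    process_type=$(jq -r '.process.type // "none"' <<< "$info")
    process_target=$(jq -r '.process.target // "none"' <<< "$info")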
00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.852 14:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.110 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:11.110 "name": "raid_bdev1", 00:31:11.110 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:11.110 "strip_size_kb": 0, 00:31:11.110 "state": "online", 00:31:11.110 "raid_level": "raid1", 00:31:11.110 "superblock": true, 00:31:11.110 "num_base_bdevs": 2, 00:31:11.110 "num_base_bdevs_discovered": 2, 00:31:11.110 "num_base_bdevs_operational": 2, 00:31:11.110 "base_bdevs_list": [ 00:31:11.110 { 00:31:11.110 "name": "spare", 00:31:11.110 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:11.110 "is_configured": true, 00:31:11.110 "data_offset": 2048, 00:31:11.110 "data_size": 63488 00:31:11.110 }, 00:31:11.110 { 00:31:11.110 "name": "BaseBdev2", 00:31:11.110 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:11.110 "is_configured": true, 00:31:11.110 "data_offset": 2048, 00:31:11.110 "data_size": 63488 00:31:11.110 } 00:31:11.110 ] 00:31:11.110 }' 00:31:11.110 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:11.110 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:11.110 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
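verify_raid_bdev_state, whose setup is being traced here, checks the same bdev_raid_get_bdevs JSON against the expected state, RAID level, strip size and operational base bdev count passed on its command line (online raid1 0 2 at this point). A rough equivalent of that check, sketched with the rpc.py path and socket from the trace and hypothetical shell variables:
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Compare the fields visible in the dumps above against the expected values.
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 2 ]]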
00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.368 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.625 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.625 "name": "raid_bdev1", 00:31:11.625 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:11.625 "strip_size_kb": 0, 00:31:11.625 "state": "online", 00:31:11.625 "raid_level": "raid1", 00:31:11.625 "superblock": true, 00:31:11.625 "num_base_bdevs": 2, 00:31:11.625 "num_base_bdevs_discovered": 2, 00:31:11.625 "num_base_bdevs_operational": 2, 00:31:11.625 "base_bdevs_list": [ 00:31:11.625 { 00:31:11.625 "name": "spare", 00:31:11.625 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:11.625 "is_configured": true, 00:31:11.625 "data_offset": 2048, 00:31:11.625 "data_size": 63488 00:31:11.625 }, 00:31:11.625 { 00:31:11.625 "name": "BaseBdev2", 00:31:11.625 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:11.625 "is_configured": true, 00:31:11.625 "data_offset": 2048, 00:31:11.625 "data_size": 63488 00:31:11.625 } 00:31:11.625 ] 00:31:11.625 }' 00:31:11.625 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.625 14:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:12.191 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:12.449 [2024-07-25 14:14:01.360143] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:12.449 [2024-07-25 14:14:01.360341] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:12.449 00:31:12.449 Latency(us) 00:31:12.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:12.449 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:12.449 raid_bdev1 : 13.08 96.18 288.53 0.00 0.00 13940.61 325.82 116773.24 00:31:12.449 =================================================================================================================== 00:31:12.449 Total : 96.18 288.53 0.00 0.00 13940.61 325.82 116773.24 00:31:12.449 [2024-07-25 14:14:01.479723] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:12.449 [2024-07-25 14:14:01.479934] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:12.449 0 00:31:12.449 [2024-07-25 14:14:01.480151] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:12.449 [2024-07-25 14:14:01.480171] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:31:12.707 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.707 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true 
']' 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:12.964 14:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:13.222 /dev/nbd0 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:13.222 1+0 records in 00:31:13.222 1+0 records out 00:31:13.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433036 s, 9.5 MB/s 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:13.222 14:14:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:13.222 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:31:13.479 /dev/nbd1 00:31:13.479 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:13.480 1+0 records in 00:31:13.480 1+0 records out 00:31:13.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748269 s, 5.5 MB/s 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:13.480 14:14:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:13.480 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:13.737 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:13.737 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:13.737 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:13.737 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:13.737 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:13.737 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:13.737 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:13.995 14:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # 
return 0 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:31:14.253 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:14.509 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:14.766 [2024-07-25 14:14:03.675336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:14.766 [2024-07-25 14:14:03.675647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:14.766 [2024-07-25 14:14:03.675751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:14.766 [2024-07-25 14:14:03.675994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:14.766 [2024-07-25 14:14:03.678835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:14.766 [2024-07-25 14:14:03.679020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:14.766 [2024-07-25 14:14:03.679283] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:14.766 [2024-07-25 14:14:03.679491] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:14.766 [2024-07-25 14:14:03.679897] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:14.766 spare 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.766 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.766 [2024-07-25 14:14:03.780153] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:31:14.766 [2024-07-25 14:14:03.780331] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:14.766 [2024-07-25 14:14:03.780555] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:31:14.766 [2024-07-25 14:14:03.781098] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000012d80 00:31:14.766 [2024-07-25 14:14:03.781265] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:31:14.766 [2024-07-25 14:14:03.781601] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:15.023 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:15.023 "name": "raid_bdev1", 00:31:15.023 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:15.023 "strip_size_kb": 0, 00:31:15.023 "state": "online", 00:31:15.023 "raid_level": "raid1", 00:31:15.023 "superblock": true, 00:31:15.023 "num_base_bdevs": 2, 00:31:15.023 "num_base_bdevs_discovered": 2, 00:31:15.023 "num_base_bdevs_operational": 2, 00:31:15.023 "base_bdevs_list": [ 00:31:15.023 { 00:31:15.023 "name": "spare", 00:31:15.023 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:15.023 "is_configured": true, 00:31:15.023 "data_offset": 2048, 00:31:15.023 "data_size": 63488 00:31:15.023 }, 00:31:15.023 { 00:31:15.023 "name": "BaseBdev2", 00:31:15.023 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:15.023 "is_configured": true, 00:31:15.023 "data_offset": 2048, 00:31:15.023 "data_size": 63488 00:31:15.023 } 00:31:15.023 ] 00:31:15.023 }' 00:31:15.023 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:15.023 14:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:15.954 "name": "raid_bdev1", 00:31:15.954 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:15.954 "strip_size_kb": 0, 00:31:15.954 "state": "online", 00:31:15.954 "raid_level": "raid1", 00:31:15.954 "superblock": true, 00:31:15.954 "num_base_bdevs": 2, 00:31:15.954 "num_base_bdevs_discovered": 2, 00:31:15.954 "num_base_bdevs_operational": 2, 00:31:15.954 "base_bdevs_list": [ 00:31:15.954 { 00:31:15.954 "name": "spare", 00:31:15.954 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:15.954 "is_configured": true, 00:31:15.954 "data_offset": 2048, 00:31:15.954 "data_size": 63488 00:31:15.954 }, 00:31:15.954 { 00:31:15.954 "name": "BaseBdev2", 00:31:15.954 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:15.954 "is_configured": true, 00:31:15.954 "data_offset": 2048, 00:31:15.954 "data_size": 63488 00:31:15.954 } 00:31:15.954 ] 00:31:15.954 }' 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 
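The "spare" leg in this test is a passthru vbdev created on top of the spare_delay base bdev, so taking it away and bringing it back is just a passthru delete followed by a re-create; on re-create the raid superblock found on it is examined and raid_bdev1 is re-assembled from spare and BaseBdev2, which is what the vbdev_passthru and raid_bdev_examine NOTICE/DEBUG lines above show. The RPC pair, as a minimal sketch against the same socket:
    # Drop the passthru leg, then re-create it over the same base bdev (names as in the trace).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b spare_delay -p spare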
00:31:15.954 14:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:16.212 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:16.212 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:16.212 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.469 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:31:16.469 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:16.726 [2024-07-25 14:14:05.544169] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.726 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.984 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.984 "name": "raid_bdev1", 00:31:16.984 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:16.984 "strip_size_kb": 0, 00:31:16.984 "state": "online", 00:31:16.984 "raid_level": "raid1", 00:31:16.984 "superblock": true, 00:31:16.984 "num_base_bdevs": 2, 00:31:16.984 "num_base_bdevs_discovered": 1, 00:31:16.984 "num_base_bdevs_operational": 1, 00:31:16.984 "base_bdevs_list": [ 00:31:16.984 { 00:31:16.984 "name": null, 00:31:16.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.984 "is_configured": false, 00:31:16.984 "data_offset": 2048, 00:31:16.984 "data_size": 63488 00:31:16.984 }, 00:31:16.984 { 00:31:16.984 "name": "BaseBdev2", 00:31:16.984 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:16.984 "is_configured": true, 00:31:16.984 "data_offset": 2048, 00:31:16.984 "data_size": 63488 00:31:16.984 } 00:31:16.984 ] 00:31:16.984 }' 00:31:16.984 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.984 14:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
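With raid_bdev1 back online, the trace exercises a degrade/re-add cycle: bdev_raid_remove_base_bdev drops spare (raid1 stays online with one operational base bdev, as the dump above shows), and the bdev_raid_add_base_bdev call that follows re-adds it, after which a rebuild onto spare starts. A minimal sketch of that pair, using the same RPCs and socket as the trace:
    # Degrade: per the dump above, raid_bdev1 keeps running on BaseBdev2 alone.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
    # Re-add: per the trace that follows, the superblock on spare is examined and a rebuild starts.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_add_base_bdev raid_bdev1 spare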
00:31:17.549 14:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:17.807 [2024-07-25 14:14:06.708645] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:17.807 [2024-07-25 14:14:06.708892] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:17.807 [2024-07-25 14:14:06.708910] bdev_raid.c:3816:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:17.807 [2024-07-25 14:14:06.708976] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:17.807 [2024-07-25 14:14:06.723605] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b340 00:31:17.807 [2024-07-25 14:14:06.725861] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:17.807 14:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:31:18.784 14:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:18.784 14:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:18.784 14:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:18.784 14:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:18.784 14:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:18.784 14:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.784 14:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.042 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:19.042 "name": "raid_bdev1", 00:31:19.042 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:19.042 "strip_size_kb": 0, 00:31:19.042 "state": "online", 00:31:19.042 "raid_level": "raid1", 00:31:19.042 "superblock": true, 00:31:19.042 "num_base_bdevs": 2, 00:31:19.042 "num_base_bdevs_discovered": 2, 00:31:19.042 "num_base_bdevs_operational": 2, 00:31:19.042 "process": { 00:31:19.042 "type": "rebuild", 00:31:19.042 "target": "spare", 00:31:19.042 "progress": { 00:31:19.042 "blocks": 24576, 00:31:19.042 "percent": 38 00:31:19.042 } 00:31:19.042 }, 00:31:19.042 "base_bdevs_list": [ 00:31:19.042 { 00:31:19.042 "name": "spare", 00:31:19.042 "uuid": "e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:19.042 "is_configured": true, 00:31:19.042 "data_offset": 2048, 00:31:19.042 "data_size": 63488 00:31:19.042 }, 00:31:19.042 { 00:31:19.042 "name": "BaseBdev2", 00:31:19.042 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:19.042 "is_configured": true, 00:31:19.042 "data_offset": 2048, 00:31:19.042 "data_size": 63488 00:31:19.042 } 00:31:19.042 ] 00:31:19.042 }' 00:31:19.042 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:19.042 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:19.042 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
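The progress object in these dumps is easy to sanity-check by hand: raid_bdev1 reports 63488 blocks, and at least in these dumps each "percent" value matches the integer percentage of "blocks" against that size (24576 of 63488 is about 38.7%, reported as 38; the earlier 14336 and 57344 points give 22 and 90 the same way). For example:
    # Integer division reproduces the "percent" field from the dump above.
    echo $(( 24576 * 100 / 63488 ))   # prints 38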
00:31:19.301 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:19.301 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:19.560 [2024-07-25 14:14:08.428126] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:19.560 [2024-07-25 14:14:08.437131] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:19.560 [2024-07-25 14:14:08.437221] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.560 [2024-07-25 14:14:08.437243] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:19.560 [2024-07-25 14:14:08.437252] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.560 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.818 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.818 "name": "raid_bdev1", 00:31:19.818 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:19.818 "strip_size_kb": 0, 00:31:19.818 "state": "online", 00:31:19.818 "raid_level": "raid1", 00:31:19.818 "superblock": true, 00:31:19.818 "num_base_bdevs": 2, 00:31:19.818 "num_base_bdevs_discovered": 1, 00:31:19.818 "num_base_bdevs_operational": 1, 00:31:19.818 "base_bdevs_list": [ 00:31:19.818 { 00:31:19.818 "name": null, 00:31:19.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.818 "is_configured": false, 00:31:19.818 "data_offset": 2048, 00:31:19.818 "data_size": 63488 00:31:19.818 }, 00:31:19.818 { 00:31:19.818 "name": "BaseBdev2", 00:31:19.818 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:19.818 "is_configured": true, 00:31:19.818 "data_offset": 2048, 00:31:19.818 "data_size": 63488 00:31:19.818 } 00:31:19.818 ] 00:31:19.818 }' 00:31:19.818 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.818 14:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:20.384 
14:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:20.642 [2024-07-25 14:14:09.657454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:20.642 [2024-07-25 14:14:09.657554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:20.642 [2024-07-25 14:14:09.657593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:31:20.642 [2024-07-25 14:14:09.657622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:20.642 [2024-07-25 14:14:09.658245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:20.642 [2024-07-25 14:14:09.658303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:20.642 [2024-07-25 14:14:09.658430] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:20.642 [2024-07-25 14:14:09.658446] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:20.642 [2024-07-25 14:14:09.658455] bdev_raid.c:3816:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:20.642 [2024-07-25 14:14:09.658509] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:20.642 [2024-07-25 14:14:09.673068] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:31:20.642 spare 00:31:20.642 [2024-07-25 14:14:09.675254] bdev_raid.c:2929:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:20.899 14:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:31:21.832 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.832 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:21.832 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:21.832 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:21.832 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:21.832 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.832 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.090 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:22.090 "name": "raid_bdev1", 00:31:22.090 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:22.090 "strip_size_kb": 0, 00:31:22.090 "state": "online", 00:31:22.090 "raid_level": "raid1", 00:31:22.090 "superblock": true, 00:31:22.090 "num_base_bdevs": 2, 00:31:22.090 "num_base_bdevs_discovered": 2, 00:31:22.090 "num_base_bdevs_operational": 2, 00:31:22.090 "process": { 00:31:22.090 "type": "rebuild", 00:31:22.090 "target": "spare", 00:31:22.090 "progress": { 00:31:22.090 "blocks": 24576, 00:31:22.090 "percent": 38 00:31:22.090 } 00:31:22.090 }, 00:31:22.090 "base_bdevs_list": [ 00:31:22.090 { 00:31:22.090 "name": "spare", 00:31:22.090 "uuid": 
"e7277a43-6620-594b-bf63-55a13ba6b46c", 00:31:22.090 "is_configured": true, 00:31:22.090 "data_offset": 2048, 00:31:22.090 "data_size": 63488 00:31:22.090 }, 00:31:22.090 { 00:31:22.090 "name": "BaseBdev2", 00:31:22.090 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:22.090 "is_configured": true, 00:31:22.090 "data_offset": 2048, 00:31:22.090 "data_size": 63488 00:31:22.090 } 00:31:22.090 ] 00:31:22.090 }' 00:31:22.090 14:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:22.090 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:22.090 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:22.090 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:22.090 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:22.348 [2024-07-25 14:14:11.321566] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.348 [2024-07-25 14:14:11.385525] bdev_raid.c:2565:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:22.348 [2024-07-25 14:14:11.385632] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.348 [2024-07-25 14:14:11.385654] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.348 [2024-07-25 14:14:11.385663] bdev_raid.c:2503:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.607 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.865 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:22.865 "name": "raid_bdev1", 00:31:22.865 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:22.865 "strip_size_kb": 0, 00:31:22.865 "state": "online", 00:31:22.865 "raid_level": "raid1", 00:31:22.865 "superblock": true, 00:31:22.865 "num_base_bdevs": 2, 00:31:22.865 "num_base_bdevs_discovered": 1, 
00:31:22.865 "num_base_bdevs_operational": 1, 00:31:22.865 "base_bdevs_list": [ 00:31:22.865 { 00:31:22.865 "name": null, 00:31:22.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.865 "is_configured": false, 00:31:22.865 "data_offset": 2048, 00:31:22.865 "data_size": 63488 00:31:22.865 }, 00:31:22.865 { 00:31:22.865 "name": "BaseBdev2", 00:31:22.865 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:22.865 "is_configured": true, 00:31:22.865 "data_offset": 2048, 00:31:22.865 "data_size": 63488 00:31:22.865 } 00:31:22.865 ] 00:31:22.865 }' 00:31:22.865 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:22.865 14:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:23.431 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:23.431 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:23.431 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:23.431 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:23.431 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:23.431 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.431 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.689 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:23.689 "name": "raid_bdev1", 00:31:23.689 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:23.689 "strip_size_kb": 0, 00:31:23.689 "state": "online", 00:31:23.689 "raid_level": "raid1", 00:31:23.689 "superblock": true, 00:31:23.689 "num_base_bdevs": 2, 00:31:23.689 "num_base_bdevs_discovered": 1, 00:31:23.689 "num_base_bdevs_operational": 1, 00:31:23.689 "base_bdevs_list": [ 00:31:23.689 { 00:31:23.689 "name": null, 00:31:23.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.689 "is_configured": false, 00:31:23.689 "data_offset": 2048, 00:31:23.689 "data_size": 63488 00:31:23.689 }, 00:31:23.689 { 00:31:23.689 "name": "BaseBdev2", 00:31:23.689 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:23.689 "is_configured": true, 00:31:23.689 "data_offset": 2048, 00:31:23.689 "data_size": 63488 00:31:23.689 } 00:31:23.689 ] 00:31:23.689 }' 00:31:23.689 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:23.689 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:23.689 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.947 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:23.947 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:23.947 14:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:24.205 [2024-07-25 14:14:13.201995] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:24.205 [2024-07-25 14:14:13.202112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.205 [2024-07-25 14:14:13.202182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:24.205 [2024-07-25 14:14:13.202216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.205 [2024-07-25 14:14:13.202850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.205 [2024-07-25 14:14:13.202903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:24.205 [2024-07-25 14:14:13.203063] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:24.205 [2024-07-25 14:14:13.203082] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:24.205 [2024-07-25 14:14:13.203090] bdev_raid.c:3777:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:24.205 BaseBdev1 00:31:24.205 14:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:25.624 "name": "raid_bdev1", 00:31:25.624 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:25.624 "strip_size_kb": 0, 00:31:25.624 "state": "online", 00:31:25.624 "raid_level": "raid1", 00:31:25.624 "superblock": true, 00:31:25.624 "num_base_bdevs": 2, 00:31:25.624 "num_base_bdevs_discovered": 1, 00:31:25.624 "num_base_bdevs_operational": 1, 00:31:25.624 "base_bdevs_list": [ 00:31:25.624 { 00:31:25.624 "name": null, 00:31:25.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.624 "is_configured": false, 00:31:25.624 "data_offset": 2048, 00:31:25.624 "data_size": 63488 00:31:25.624 }, 00:31:25.624 { 00:31:25.624 "name": "BaseBdev2", 00:31:25.624 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:25.624 "is_configured": 
true, 00:31:25.624 "data_offset": 2048, 00:31:25.624 "data_size": 63488 00:31:25.624 } 00:31:25.624 ] 00:31:25.624 }' 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:25.624 14:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:26.191 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:26.191 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:26.191 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:26.191 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:26.191 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:26.191 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.191 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:26.755 "name": "raid_bdev1", 00:31:26.755 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:26.755 "strip_size_kb": 0, 00:31:26.755 "state": "online", 00:31:26.755 "raid_level": "raid1", 00:31:26.755 "superblock": true, 00:31:26.755 "num_base_bdevs": 2, 00:31:26.755 "num_base_bdevs_discovered": 1, 00:31:26.755 "num_base_bdevs_operational": 1, 00:31:26.755 "base_bdevs_list": [ 00:31:26.755 { 00:31:26.755 "name": null, 00:31:26.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.755 "is_configured": false, 00:31:26.755 "data_offset": 2048, 00:31:26.755 "data_size": 63488 00:31:26.755 }, 00:31:26.755 { 00:31:26.755 "name": "BaseBdev2", 00:31:26.755 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:26.755 "is_configured": true, 00:31:26.755 "data_offset": 2048, 00:31:26.755 "data_size": 63488 00:31:26.755 } 00:31:26.755 ] 00:31:26.755 }' 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:26.755 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:27.013 [2024-07-25 14:14:15.887413] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:27.013 [2024-07-25 14:14:15.887681] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:27.013 [2024-07-25 14:14:15.887698] bdev_raid.c:3777:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:27.013 request: 00:31:27.013 { 00:31:27.013 "base_bdev": "BaseBdev1", 00:31:27.013 "raid_bdev": "raid_bdev1", 00:31:27.013 "skip_rebuild": false, 00:31:27.013 "method": "bdev_raid_add_base_bdev", 00:31:27.013 "req_id": 1 00:31:27.013 } 00:31:27.013 Got JSON-RPC error response 00:31:27.013 response: 00:31:27.013 { 00:31:27.013 "code": -22, 00:31:27.013 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:27.013 } 00:31:27.013 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:31:27.013 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:27.013 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:27.013 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:27.013 14:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.945 14:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.203 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.203 "name": "raid_bdev1", 00:31:28.203 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:28.203 "strip_size_kb": 0, 00:31:28.203 "state": "online", 00:31:28.203 "raid_level": "raid1", 00:31:28.203 "superblock": true, 00:31:28.203 "num_base_bdevs": 2, 00:31:28.203 "num_base_bdevs_discovered": 1, 00:31:28.203 "num_base_bdevs_operational": 1, 00:31:28.203 "base_bdevs_list": [ 00:31:28.203 { 00:31:28.203 "name": null, 00:31:28.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.203 "is_configured": false, 00:31:28.203 "data_offset": 2048, 00:31:28.203 "data_size": 63488 00:31:28.203 }, 00:31:28.203 { 00:31:28.203 "name": "BaseBdev2", 00:31:28.203 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:28.203 "is_configured": true, 00:31:28.203 "data_offset": 2048, 00:31:28.203 "data_size": 63488 00:31:28.203 } 00:31:28.203 ] 00:31:28.203 }' 00:31:28.203 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.203 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:29.136 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:29.136 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:29.136 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:29.136 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:29.136 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:29.136 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.136 14:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.136 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:29.136 "name": "raid_bdev1", 00:31:29.136 "uuid": "240ae5e2-0f59-45b2-b7a0-44821a3621ed", 00:31:29.136 "strip_size_kb": 0, 00:31:29.137 "state": "online", 00:31:29.137 "raid_level": "raid1", 00:31:29.137 "superblock": true, 00:31:29.137 "num_base_bdevs": 2, 00:31:29.137 "num_base_bdevs_discovered": 1, 00:31:29.137 "num_base_bdevs_operational": 1, 00:31:29.137 "base_bdevs_list": [ 00:31:29.137 { 00:31:29.137 "name": null, 00:31:29.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.137 "is_configured": false, 00:31:29.137 "data_offset": 2048, 00:31:29.137 "data_size": 63488 00:31:29.137 }, 00:31:29.137 { 00:31:29.137 "name": "BaseBdev2", 00:31:29.137 "uuid": "894d537f-b5bc-59dc-9a51-672363c66a7b", 00:31:29.137 "is_configured": true, 00:31:29.137 "data_offset": 2048, 00:31:29.137 "data_size": 63488 00:31:29.137 } 00:31:29.137 ] 00:31:29.137 }' 00:31:29.137 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:29.394 14:14:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 146334 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 146334 ']' 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 146334 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 146334 00:31:29.394 killing process with pid 146334 00:31:29.394 Received shutdown signal, test time was about 29.903157 seconds 00:31:29.394 00:31:29.394 Latency(us) 00:31:29.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.394 =================================================================================================================== 00:31:29.394 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 146334' 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 146334 00:31:29.394 14:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 146334 00:31:29.394 [2024-07-25 14:14:18.283178] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:29.394 [2024-07-25 14:14:18.283325] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:29.394 [2024-07-25 14:14:18.283400] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:29.394 [2024-07-25 14:14:18.283419] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:31:29.651 [2024-07-25 14:14:18.467156] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:30.582 ************************************ 00:31:30.582 END TEST raid_rebuild_test_sb_io 00:31:30.582 ************************************ 00:31:30.582 14:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:31:30.582 00:31:30.582 real 0m36.248s 00:31:30.582 user 0m58.554s 00:31:30.582 sys 0m3.557s 00:31:30.582 14:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:30.582 14:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:30.841 14:14:19 bdev_raid -- bdev/bdev_raid.sh@1035 -- # run_test raid_add_bdev_without_rebuild raid_add_bdev_without_rebuild 2 false 00:31:30.841 14:14:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:30.841 14:14:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:30.841 14:14:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:30.841 ************************************ 00:31:30.841 START TEST raid_add_bdev_without_rebuild 
00:31:30.841 ************************************ 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@1125 -- # raid_add_bdev_without_rebuild 2 false 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@806 -- # local superblock=false 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@809 -- # local strip_size=0 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@810 -- # local data_offset 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@813 -- # raid_pid=147242 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@814 -- # waitforlisten 147242 /var/tmp/spdk-raid.sock 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@812 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@831 -- # '[' -z 147242 ']' 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:30.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:30.841 14:14:19 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@10 -- # set +x 00:31:30.841 [2024-07-25 14:14:19.770711] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 
00:31:30.841 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:30.841 Zero copy mechanism will not be used. 00:31:30.841 [2024-07-25 14:14:19.770975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147242 ] 00:31:31.099 [2024-07-25 14:14:19.945167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.357 [2024-07-25 14:14:20.197671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.357 [2024-07-25 14:14:20.396824] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:31.921 14:14:20 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:31.921 14:14:20 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@864 -- # return 0 00:31:31.921 14:14:20 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@817 -- # for bdev in "${base_bdevs[@]}" 00:31:31.921 14:14:20 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@818 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:32.179 BaseBdev1_malloc 00:31:32.179 14:14:20 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:32.179 [2024-07-25 14:14:21.207417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:32.179 [2024-07-25 14:14:21.207574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.179 [2024-07-25 14:14:21.207630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:32.179 [2024-07-25 14:14:21.207664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.179 [2024-07-25 14:14:21.210317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.180 [2024-07-25 14:14:21.210373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:32.180 BaseBdev1 00:31:32.448 14:14:21 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@817 -- # for bdev in "${base_bdevs[@]}" 00:31:32.448 14:14:21 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@818 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:32.723 BaseBdev2_malloc 00:31:32.723 14:14:21 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:32.982 [2024-07-25 14:14:21.788624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:32.982 [2024-07-25 14:14:21.788765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.982 [2024-07-25 14:14:21.788827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:32.982 [2024-07-25 14:14:21.788850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.982 [2024-07-25 14:14:21.791489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.982 [2024-07-25 
14:14:21.791541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:32.982 BaseBdev2 00:31:32.982 14:14:21 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@823 -- # '[' false = true ']' 00:31:32.982 14:14:21 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:32.982 [2024-07-25 14:14:22.024707] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:33.239 [2024-07-25 14:14:22.027016] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:33.240 [2024-07-25 14:14:22.027161] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:31:33.240 [2024-07-25 14:14:22.027177] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:33.240 [2024-07-25 14:14:22.027337] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:31:33.240 [2024-07-25 14:14:22.027810] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:31:33.240 [2024-07-25 14:14:22.027834] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:31:33.240 [2024-07-25 14:14:22.028061] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@824 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:33.240 "name": "raid_bdev1", 00:31:33.240 "uuid": "66219653-3a83-4a3f-8d72-231799ecedb9", 00:31:33.240 "strip_size_kb": 0, 00:31:33.240 "state": "online", 00:31:33.240 "raid_level": "raid1", 00:31:33.240 "superblock": false, 00:31:33.240 "num_base_bdevs": 2, 00:31:33.240 "num_base_bdevs_discovered": 2, 00:31:33.240 "num_base_bdevs_operational": 2, 00:31:33.240 "base_bdevs_list": [ 00:31:33.240 { 00:31:33.240 "name": 
"BaseBdev1", 00:31:33.240 "uuid": "8f05c237-c78e-5584-aad6-277452cad634", 00:31:33.240 "is_configured": true, 00:31:33.240 "data_offset": 0, 00:31:33.240 "data_size": 65536 00:31:33.240 }, 00:31:33.240 { 00:31:33.240 "name": "BaseBdev2", 00:31:33.240 "uuid": "a9861eee-8284-5dd6-b687-055577389f14", 00:31:33.240 "is_configured": true, 00:31:33.240 "data_offset": 0, 00:31:33.240 "data_size": 65536 00:31:33.240 } 00:31:33.240 ] 00:31:33.240 }' 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:33.240 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@10 -- # set +x 00:31:34.174 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.174 14:14:22 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@827 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:34.174 14:14:23 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@827 -- # data_offset=0 00:31:34.174 14:14:23 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:34.441 [2024-07-25 14:14:23.477071] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:34.699 14:14:23 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@833 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b BaseBdev1 -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:34.699 spare_delay 00:31:34.699 14:14:23 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@834 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:35.264 [2024-07-25 14:14:24.009187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:35.264 [2024-07-25 14:14:24.009309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:35.264 [2024-07-25 14:14:24.009356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:35.264 [2024-07-25 14:14:24.009388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:35.264 [2024-07-25 14:14:24.010026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:35.264 [2024-07-25 14:14:24.010077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:35.264 spare 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.264 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.523 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:35.523 "name": "raid_bdev1", 00:31:35.523 "uuid": "66219653-3a83-4a3f-8d72-231799ecedb9", 00:31:35.523 "strip_size_kb": 0, 00:31:35.523 "state": "online", 00:31:35.523 "raid_level": "raid1", 00:31:35.523 "superblock": false, 00:31:35.523 "num_base_bdevs": 2, 00:31:35.523 "num_base_bdevs_discovered": 1, 00:31:35.523 "num_base_bdevs_operational": 1, 00:31:35.523 "base_bdevs_list": [ 00:31:35.523 { 00:31:35.523 "name": null, 00:31:35.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.523 "is_configured": false, 00:31:35.523 "data_offset": 0, 00:31:35.523 "data_size": 65536 00:31:35.523 }, 00:31:35.523 { 00:31:35.523 "name": "BaseBdev2", 00:31:35.523 "uuid": "a9861eee-8284-5dd6-b687-055577389f14", 00:31:35.523 "is_configured": true, 00:31:35.523 "data_offset": 0, 00:31:35.523 "data_size": 65536 00:31:35.523 } 00:31:35.523 ] 00:31:35.523 }' 00:31:35.523 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:35.523 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@10 -- # set +x 00:31:36.090 14:14:24 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@840 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare -s 00:31:36.348 [2024-07-25 14:14:25.193507] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:36.348 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@841 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -t 1000 00:31:36.606 [2024-07-25 14:14:25.478623] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:36.606 [ 00:31:36.606 { 00:31:36.606 "name": "BaseBdev1_malloc", 00:31:36.606 "aliases": [ 00:31:36.606 "3ca0cc5d-a243-41e2-8580-b80651004a75" 00:31:36.606 ], 00:31:36.606 "product_name": "Malloc disk", 00:31:36.606 "block_size": 512, 00:31:36.606 "num_blocks": 65536, 00:31:36.606 "uuid": "3ca0cc5d-a243-41e2-8580-b80651004a75", 00:31:36.606 "assigned_rate_limits": { 00:31:36.606 "rw_ios_per_sec": 0, 00:31:36.606 "rw_mbytes_per_sec": 0, 00:31:36.606 "r_mbytes_per_sec": 0, 00:31:36.606 "w_mbytes_per_sec": 0 00:31:36.606 }, 00:31:36.606 "claimed": true, 00:31:36.606 "claim_type": "exclusive_write", 00:31:36.606 "zoned": false, 00:31:36.607 "supported_io_types": { 00:31:36.607 "read": true, 00:31:36.607 "write": true, 00:31:36.607 "unmap": true, 00:31:36.607 "flush": true, 00:31:36.607 "reset": true, 00:31:36.607 "nvme_admin": false, 00:31:36.607 "nvme_io": false, 00:31:36.607 "nvme_io_md": false, 00:31:36.607 "write_zeroes": true, 00:31:36.607 "zcopy": true, 00:31:36.607 "get_zone_info": false, 00:31:36.607 "zone_management": false, 00:31:36.607 "zone_append": false, 00:31:36.607 "compare": false, 00:31:36.607 "compare_and_write": 
false, 00:31:36.607 "abort": true, 00:31:36.607 "seek_hole": false, 00:31:36.607 "seek_data": false, 00:31:36.607 "copy": true, 00:31:36.607 "nvme_iov_md": false 00:31:36.607 }, 00:31:36.607 "memory_domains": [ 00:31:36.607 { 00:31:36.607 "dma_device_id": "system", 00:31:36.607 "dma_device_type": 1 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.607 "dma_device_type": 2 00:31:36.607 } 00:31:36.607 ], 00:31:36.607 "driver_specific": {} 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "name": "BaseBdev1", 00:31:36.607 "aliases": [ 00:31:36.607 "8f05c237-c78e-5584-aad6-277452cad634" 00:31:36.607 ], 00:31:36.607 "product_name": "passthru", 00:31:36.607 "block_size": 512, 00:31:36.607 "num_blocks": 65536, 00:31:36.607 "uuid": "8f05c237-c78e-5584-aad6-277452cad634", 00:31:36.607 "assigned_rate_limits": { 00:31:36.607 "rw_ios_per_sec": 0, 00:31:36.607 "rw_mbytes_per_sec": 0, 00:31:36.607 "r_mbytes_per_sec": 0, 00:31:36.607 "w_mbytes_per_sec": 0 00:31:36.607 }, 00:31:36.607 "claimed": true, 00:31:36.607 "claim_type": "exclusive_write", 00:31:36.607 "zoned": false, 00:31:36.607 "supported_io_types": { 00:31:36.607 "read": true, 00:31:36.607 "write": true, 00:31:36.607 "unmap": true, 00:31:36.607 "flush": true, 00:31:36.607 "reset": true, 00:31:36.607 "nvme_admin": false, 00:31:36.607 "nvme_io": false, 00:31:36.607 "nvme_io_md": false, 00:31:36.607 "write_zeroes": true, 00:31:36.607 "zcopy": true, 00:31:36.607 "get_zone_info": false, 00:31:36.607 "zone_management": false, 00:31:36.607 "zone_append": false, 00:31:36.607 "compare": false, 00:31:36.607 "compare_and_write": false, 00:31:36.607 "abort": true, 00:31:36.607 "seek_hole": false, 00:31:36.607 "seek_data": false, 00:31:36.607 "copy": true, 00:31:36.607 "nvme_iov_md": false 00:31:36.607 }, 00:31:36.607 "memory_domains": [ 00:31:36.607 { 00:31:36.607 "dma_device_id": "system", 00:31:36.607 "dma_device_type": 1 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.607 "dma_device_type": 2 00:31:36.607 } 00:31:36.607 ], 00:31:36.607 "driver_specific": { 00:31:36.607 "passthru": { 00:31:36.607 "name": "BaseBdev1", 00:31:36.607 "base_bdev_name": "BaseBdev1_malloc" 00:31:36.607 } 00:31:36.607 } 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "name": "BaseBdev2_malloc", 00:31:36.607 "aliases": [ 00:31:36.607 "6a863614-6e23-41be-87f2-fa091fe11e82" 00:31:36.607 ], 00:31:36.607 "product_name": "Malloc disk", 00:31:36.607 "block_size": 512, 00:31:36.607 "num_blocks": 65536, 00:31:36.607 "uuid": "6a863614-6e23-41be-87f2-fa091fe11e82", 00:31:36.607 "assigned_rate_limits": { 00:31:36.607 "rw_ios_per_sec": 0, 00:31:36.607 "rw_mbytes_per_sec": 0, 00:31:36.607 "r_mbytes_per_sec": 0, 00:31:36.607 "w_mbytes_per_sec": 0 00:31:36.607 }, 00:31:36.607 "claimed": true, 00:31:36.607 "claim_type": "exclusive_write", 00:31:36.607 "zoned": false, 00:31:36.607 "supported_io_types": { 00:31:36.607 "read": true, 00:31:36.607 "write": true, 00:31:36.607 "unmap": true, 00:31:36.607 "flush": true, 00:31:36.607 "reset": true, 00:31:36.607 "nvme_admin": false, 00:31:36.607 "nvme_io": false, 00:31:36.607 "nvme_io_md": false, 00:31:36.607 "write_zeroes": true, 00:31:36.607 "zcopy": true, 00:31:36.607 "get_zone_info": false, 00:31:36.607 "zone_management": false, 00:31:36.607 "zone_append": false, 00:31:36.607 "compare": false, 00:31:36.607 "compare_and_write": false, 00:31:36.607 "abort": true, 00:31:36.607 "seek_hole": false, 00:31:36.607 "seek_data": false, 00:31:36.607 "copy": true, 00:31:36.607 
"nvme_iov_md": false 00:31:36.607 }, 00:31:36.607 "memory_domains": [ 00:31:36.607 { 00:31:36.607 "dma_device_id": "system", 00:31:36.607 "dma_device_type": 1 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.607 "dma_device_type": 2 00:31:36.607 } 00:31:36.607 ], 00:31:36.607 "driver_specific": {} 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "name": "BaseBdev2", 00:31:36.607 "aliases": [ 00:31:36.607 "a9861eee-8284-5dd6-b687-055577389f14" 00:31:36.607 ], 00:31:36.607 "product_name": "passthru", 00:31:36.607 "block_size": 512, 00:31:36.607 "num_blocks": 65536, 00:31:36.607 "uuid": "a9861eee-8284-5dd6-b687-055577389f14", 00:31:36.607 "assigned_rate_limits": { 00:31:36.607 "rw_ios_per_sec": 0, 00:31:36.607 "rw_mbytes_per_sec": 0, 00:31:36.607 "r_mbytes_per_sec": 0, 00:31:36.607 "w_mbytes_per_sec": 0 00:31:36.607 }, 00:31:36.607 "claimed": true, 00:31:36.607 "claim_type": "exclusive_write", 00:31:36.607 "zoned": false, 00:31:36.607 "supported_io_types": { 00:31:36.607 "read": true, 00:31:36.607 "write": true, 00:31:36.607 "unmap": true, 00:31:36.607 "flush": true, 00:31:36.607 "reset": true, 00:31:36.607 "nvme_admin": false, 00:31:36.607 "nvme_io": false, 00:31:36.607 "nvme_io_md": false, 00:31:36.607 "write_zeroes": true, 00:31:36.607 "zcopy": true, 00:31:36.607 "get_zone_info": false, 00:31:36.607 "zone_management": false, 00:31:36.607 "zone_append": false, 00:31:36.607 "compare": false, 00:31:36.607 "compare_and_write": false, 00:31:36.607 "abort": true, 00:31:36.607 "seek_hole": false, 00:31:36.607 "seek_data": false, 00:31:36.607 "copy": true, 00:31:36.607 "nvme_iov_md": false 00:31:36.607 }, 00:31:36.607 "memory_domains": [ 00:31:36.607 { 00:31:36.607 "dma_device_id": "system", 00:31:36.607 "dma_device_type": 1 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.607 "dma_device_type": 2 00:31:36.607 } 00:31:36.607 ], 00:31:36.607 "driver_specific": { 00:31:36.607 "passthru": { 00:31:36.607 "name": "BaseBdev2", 00:31:36.607 "base_bdev_name": "BaseBdev2_malloc" 00:31:36.607 } 00:31:36.607 } 00:31:36.607 }, 00:31:36.607 { 00:31:36.607 "name": "raid_bdev1", 00:31:36.607 "aliases": [ 00:31:36.607 "66219653-3a83-4a3f-8d72-231799ecedb9" 00:31:36.607 ], 00:31:36.607 "product_name": "Raid Volume", 00:31:36.607 "block_size": 512, 00:31:36.607 "num_blocks": 65536, 00:31:36.607 "uuid": "66219653-3a83-4a3f-8d72-231799ecedb9", 00:31:36.607 "assigned_rate_limits": { 00:31:36.607 "rw_ios_per_sec": 0, 00:31:36.607 "rw_mbytes_per_sec": 0, 00:31:36.607 "r_mbytes_per_sec": 0, 00:31:36.607 "w_mbytes_per_sec": 0 00:31:36.607 }, 00:31:36.607 "claimed": false, 00:31:36.607 "zoned": false, 00:31:36.607 "supported_io_types": { 00:31:36.607 "read": true, 00:31:36.607 "write": true, 00:31:36.607 "unmap": false, 00:31:36.607 "flush": false, 00:31:36.607 "reset": true, 00:31:36.607 "nvme_admin": false, 00:31:36.607 "nvme_io": false, 00:31:36.607 "nvme_io_md": false, 00:31:36.607 "write_zeroes": true, 00:31:36.607 "zcopy": false, 00:31:36.607 "get_zone_info": false, 00:31:36.608 "zone_management": false, 00:31:36.608 "zone_append": false, 00:31:36.608 "compare": false, 00:31:36.608 "compare_and_write": false, 00:31:36.608 "abort": false, 00:31:36.608 "seek_hole": false, 00:31:36.608 "seek_data": false, 00:31:36.608 "copy": false, 00:31:36.608 "nvme_iov_md": false 00:31:36.608 }, 00:31:36.608 "memory_domains": [ 00:31:36.608 { 00:31:36.608 "dma_device_id": "system", 00:31:36.608 "dma_device_type": 1 00:31:36.608 }, 00:31:36.608 { 
00:31:36.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.608 "dma_device_type": 2 00:31:36.608 }, 00:31:36.608 { 00:31:36.608 "dma_device_id": "system", 00:31:36.608 "dma_device_type": 1 00:31:36.608 }, 00:31:36.608 { 00:31:36.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.608 "dma_device_type": 2 00:31:36.608 } 00:31:36.608 ], 00:31:36.608 "driver_specific": { 00:31:36.608 "raid": { 00:31:36.608 "uuid": "66219653-3a83-4a3f-8d72-231799ecedb9", 00:31:36.608 "strip_size_kb": 0, 00:31:36.608 "state": "online", 00:31:36.608 "raid_level": "raid1", 00:31:36.608 "superblock": false, 00:31:36.608 "num_base_bdevs": 2, 00:31:36.608 "num_base_bdevs_discovered": 2, 00:31:36.608 "num_base_bdevs_operational": 2, 00:31:36.608 "base_bdevs_list": [ 00:31:36.608 { 00:31:36.608 "name": "spare", 00:31:36.608 "uuid": "7a077c24-ebb8-5948-bf0d-0b3a01b6391e", 00:31:36.608 "is_configured": true, 00:31:36.608 "data_offset": 0, 00:31:36.608 "data_size": 65536 00:31:36.608 }, 00:31:36.608 { 00:31:36.608 "name": "BaseBdev2", 00:31:36.608 "uuid": "a9861eee-8284-5dd6-b687-055577389f14", 00:31:36.608 "is_configured": true, 00:31:36.608 "data_offset": 0, 00:31:36.608 "data_size": 65536 00:31:36.608 } 00:31:36.608 ] 00:31:36.608 } 00:31:36.608 } 00:31:36.608 }, 00:31:36.608 { 00:31:36.608 "name": "spare_delay", 00:31:36.608 "aliases": [ 00:31:36.608 "435538cb-babe-5113-8551-851b2adfa9ec" 00:31:36.608 ], 00:31:36.608 "product_name": "delay", 00:31:36.608 "block_size": 512, 00:31:36.608 "num_blocks": 65536, 00:31:36.608 "uuid": "435538cb-babe-5113-8551-851b2adfa9ec", 00:31:36.608 "assigned_rate_limits": { 00:31:36.608 "rw_ios_per_sec": 0, 00:31:36.608 "rw_mbytes_per_sec": 0, 00:31:36.608 "r_mbytes_per_sec": 0, 00:31:36.608 "w_mbytes_per_sec": 0 00:31:36.608 }, 00:31:36.608 "claimed": true, 00:31:36.608 "claim_type": "exclusive_write", 00:31:36.608 "zoned": false, 00:31:36.608 "supported_io_types": { 00:31:36.608 "read": true, 00:31:36.608 "write": true, 00:31:36.608 "unmap": true, 00:31:36.608 "flush": true, 00:31:36.608 "reset": true, 00:31:36.608 "nvme_admin": false, 00:31:36.608 "nvme_io": false, 00:31:36.608 "nvme_io_md": false, 00:31:36.608 "write_zeroes": true, 00:31:36.608 "zcopy": true, 00:31:36.608 "get_zone_info": false, 00:31:36.608 "zone_management": false, 00:31:36.608 "zone_append": false, 00:31:36.608 "compare": false, 00:31:36.608 "compare_and_write": false, 00:31:36.608 "abort": true, 00:31:36.608 "seek_hole": false, 00:31:36.608 "seek_data": false, 00:31:36.608 "copy": true, 00:31:36.608 "nvme_iov_md": false 00:31:36.608 }, 00:31:36.608 "memory_domains": [ 00:31:36.608 { 00:31:36.608 "dma_device_id": "system", 00:31:36.608 "dma_device_type": 1 00:31:36.608 }, 00:31:36.608 { 00:31:36.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.608 "dma_device_type": 2 00:31:36.608 } 00:31:36.608 ], 00:31:36.608 "driver_specific": { 00:31:36.608 "delay": { 00:31:36.608 "name": "spare_delay", 00:31:36.608 "base_bdev_name": "BaseBdev1", 00:31:36.608 "uuid": "435538cb-babe-5113-8551-851b2adfa9ec", 00:31:36.608 "avg_read_latency": 0, 00:31:36.608 "p99_read_latency": 0, 00:31:36.608 "avg_write_latency": 100000, 00:31:36.608 "p99_write_latency": 100000 00:31:36.608 } 00:31:36.608 } 00:31:36.608 }, 00:31:36.608 { 00:31:36.608 "name": "spare", 00:31:36.608 "aliases": [ 00:31:36.608 "7a077c24-ebb8-5948-bf0d-0b3a01b6391e" 00:31:36.608 ], 00:31:36.608 "product_name": "passthru", 00:31:36.608 "block_size": 512, 00:31:36.608 "num_blocks": 65536, 00:31:36.608 "uuid": "7a077c24-ebb8-5948-bf0d-0b3a01b6391e", 
00:31:36.608 "assigned_rate_limits": { 00:31:36.608 "rw_ios_per_sec": 0, 00:31:36.608 "rw_mbytes_per_sec": 0, 00:31:36.608 "r_mbytes_per_sec": 0, 00:31:36.608 "w_mbytes_per_sec": 0 00:31:36.608 }, 00:31:36.608 "claimed": true, 00:31:36.608 "claim_type": "exclusive_write", 00:31:36.608 "zoned": false, 00:31:36.608 "supported_io_types": { 00:31:36.608 "read": true, 00:31:36.608 "write": true, 00:31:36.608 "unmap": true, 00:31:36.608 "flush": true, 00:31:36.608 "reset": true, 00:31:36.608 "nvme_admin": false, 00:31:36.608 "nvme_io": false, 00:31:36.608 "nvme_io_md": false, 00:31:36.608 "write_zeroes": true, 00:31:36.608 "zcopy": true, 00:31:36.608 "get_zone_info": false, 00:31:36.608 "zone_management": false, 00:31:36.608 "zone_append": false, 00:31:36.608 "compare": false, 00:31:36.608 "compare_and_write": false, 00:31:36.608 "abort": true, 00:31:36.608 "seek_hole": false, 00:31:36.608 "seek_data": false, 00:31:36.608 "copy": true, 00:31:36.608 "nvme_iov_md": false 00:31:36.608 }, 00:31:36.608 "memory_domains": [ 00:31:36.608 { 00:31:36.608 "dma_device_id": "system", 00:31:36.608 "dma_device_type": 1 00:31:36.608 }, 00:31:36.608 { 00:31:36.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.608 "dma_device_type": 2 00:31:36.608 } 00:31:36.608 ], 00:31:36.608 "driver_specific": { 00:31:36.608 "passthru": { 00:31:36.608 "name": "spare", 00:31:36.608 "base_bdev_name": "spare_delay" 00:31:36.608 } 00:31:36.608 } 00:31:36.608 } 00:31:36.608 ] 00:31:36.608 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@844 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:36.608 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.608 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:36.608 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:36.608 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.608 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.608 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.866 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:36.866 "name": "raid_bdev1", 00:31:36.866 "uuid": "66219653-3a83-4a3f-8d72-231799ecedb9", 00:31:36.866 "strip_size_kb": 0, 00:31:36.866 "state": "online", 00:31:36.866 "raid_level": "raid1", 00:31:36.866 "superblock": false, 00:31:36.866 "num_base_bdevs": 2, 00:31:36.866 "num_base_bdevs_discovered": 2, 00:31:36.866 "num_base_bdevs_operational": 2, 00:31:36.866 "base_bdevs_list": [ 00:31:36.866 { 00:31:36.866 "name": "spare", 00:31:36.866 "uuid": "7a077c24-ebb8-5948-bf0d-0b3a01b6391e", 00:31:36.866 "is_configured": true, 00:31:36.866 "data_offset": 0, 00:31:36.866 "data_size": 65536 00:31:36.866 }, 00:31:36.866 { 00:31:36.866 "name": "BaseBdev2", 00:31:36.866 "uuid": "a9861eee-8284-5dd6-b687-055577389f14", 00:31:36.866 "is_configured": true, 00:31:36.866 "data_offset": 0, 00:31:36.866 "data_size": 65536 00:31:36.866 } 00:31:36.866 ] 00:31:36.866 }' 00:31:36.866 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.866 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:36.866 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.866 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:36.866 14:14:25 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@847 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests -t 1 00:31:37.123 [2024-07-25 14:14:25.936786] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:37.123 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:37.123 Zero copy mechanism will not be used. 00:31:37.123 Running I/O for 1 seconds... 00:31:38.058 00:31:38.058 Latency(us) 00:31:38.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.058 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:38.058 raid_bdev1 : 1.10 34.51 103.54 0.00 0.00 51726.67 350.02 111053.73 00:31:38.058 =================================================================================================================== 00:31:38.058 Total : 34.51 103.54 0.00 0.00 51726.67 350.02 111053.73 00:31:38.058 [2024-07-25 14:14:27.059397] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:38.058 0 00:31:38.058 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@850 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:38.317 [2024-07-25 14:14:27.359593] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:38.317 [2024-07-25 14:14:27.359644] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:38.317 [2024-07-25 14:14:27.359749] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:38.317 [2024-07-25 14:14:27.359825] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:38.317 [2024-07-25 14:14:27.359839] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:31:38.577 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@851 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.577 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@851 -- # jq length 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@851 -- # [[ 0 == 0 ]] 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@854 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@12 -- # local i 00:31:38.834 14:14:27 
bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:38.834 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:39.093 /dev/nbd0 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@869 -- # local i 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@873 -- # break 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:39.093 1+0 records in 00:31:39.093 1+0 records out 00:31:39.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482004 s, 8.5 MB/s 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@886 -- # size=4096 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:39.093 14:14:27 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@889 -- # return 0 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@855 -- # for bdev in "${base_bdevs[@]:1}" 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@856 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@12 -- # local i 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:39.093 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:31:39.351 /dev/nbd1 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@869 -- # local i 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@873 -- # break 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:39.351 1+0 records in 00:31:39.351 1+0 records out 00:31:39.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309154 s, 13.2 MB/s 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@886 -- # size=4096 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@889 -- # return 0 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:39.351 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@857 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:39.608 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@858 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:39.608 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:39.609 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:39.609 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:39.609 14:14:28 
bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@51 -- # local i 00:31:39.609 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:39.609 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@41 -- # break 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@45 -- # return 0 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@860 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@51 -- # local i 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:39.866 14:14:28 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@41 -- # break 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/nbd_common.sh@45 -- # return 0 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@863 -- # '[' false = true ']' 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@872 -- # killprocess 147242 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@950 -- # '[' -z 147242 ']' 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@954 -- # kill -0 147242 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- 
common/autotest_common.sh@955 -- # uname 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 147242 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:40.123 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@968 -- # echo 'killing process with pid 147242' 00:31:40.124 killing process with pid 147242 00:31:40.124 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@969 -- # kill 147242 00:31:40.124 Received shutdown signal, test time was about 1.000000 seconds 00:31:40.124 00:31:40.124 Latency(us) 00:31:40.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.124 =================================================================================================================== 00:31:40.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:40.124 [2024-07-25 14:14:29.148989] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:40.124 14:14:29 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@974 -- # wait 147242 00:31:40.381 [2024-07-25 14:14:29.313932] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild -- bdev/bdev_raid.sh@874 -- # return 0 00:31:41.754 00:31:41.754 real 0m10.741s 00:31:41.754 user 0m16.712s 00:31:41.754 sys 0m1.625s 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild -- common/autotest_common.sh@10 -- # set +x 00:31:41.754 ************************************ 00:31:41.754 END TEST raid_add_bdev_without_rebuild 00:31:41.754 ************************************ 00:31:41.754 14:14:30 bdev_raid -- bdev/bdev_raid.sh@1036 -- # run_test raid_add_bdev_without_rebuild_sb raid_add_bdev_without_rebuild 2 true 00:31:41.754 14:14:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:41.754 14:14:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.754 14:14:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:41.754 ************************************ 00:31:41.754 START TEST raid_add_bdev_without_rebuild_sb 00:31:41.754 ************************************ 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@1125 -- # raid_add_bdev_without_rebuild 2 true 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@806 -- # local superblock=true 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:31:41.754 14:14:30 
bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:31:41.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@809 -- # local strip_size=0 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@810 -- # local data_offset 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@813 -- # raid_pid=147524 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@814 -- # waitforlisten 147524 /var/tmp/spdk-raid.sock 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@812 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@831 -- # '[' -z 147524 ']' 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:41.754 14:14:30 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.754 [2024-07-25 14:14:30.541712] Starting SPDK v24.09-pre git sha1 50fa6ca31 / DPDK 24.03.0 initialization... 00:31:41.754 [2024-07-25 14:14:30.541934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147524 ] 00:31:41.754 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:41.754 Zero copy mechanism will not be used. 
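For reference, the bdevperf launch and socket wait recorded above reduce to the sketch below. The binary path, RPC socket and flags are copied verbatim from the log; the rpc_get_methods polling loop is only a stand-in for the harness's waitforlisten helper and is an assumption of this sketch, not part of the original script.

    rpc_server=/var/tmp/spdk-raid.sock
    # Start bdevperf idle (-z, wait for the perform_tests RPC) as an RPC target;
    # the workload flags match the Job line reported further down in the log
    # (randrw, 50% reads, queue depth 2, 3 MiB I/O size).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$rpc_server" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll the UNIX-domain RPC socket until it answers before configuring any bdevs
    # (assumed stand-in for waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done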
00:31:41.754 [2024-07-25 14:14:30.698683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.012 [2024-07-25 14:14:30.914239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.270 [2024-07-25 14:14:31.111134] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:42.559 14:14:31 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:42.559 14:14:31 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@864 -- # return 0 00:31:42.559 14:14:31 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@817 -- # for bdev in "${base_bdevs[@]}" 00:31:42.559 14:14:31 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@818 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:42.817 BaseBdev1_malloc 00:31:42.817 14:14:31 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:43.073 [2024-07-25 14:14:32.097757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:43.073 [2024-07-25 14:14:32.097926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.073 [2024-07-25 14:14:32.097973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:43.073 [2024-07-25 14:14:32.098004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.073 [2024-07-25 14:14:32.100601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.073 [2024-07-25 14:14:32.100658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:43.073 BaseBdev1 00:31:43.073 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@817 -- # for bdev in "${base_bdevs[@]}" 00:31:43.073 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@818 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:43.637 BaseBdev2_malloc 00:31:43.637 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:43.895 [2024-07-25 14:14:32.685501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:43.896 [2024-07-25 14:14:32.685641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.896 [2024-07-25 14:14:32.685687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:43.896 [2024-07-25 14:14:32.685711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.896 [2024-07-25 14:14:32.688222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.896 [2024-07-25 14:14:32.688280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:43.896 BaseBdev2 00:31:43.896 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@823 -- # '[' true = true ']' 00:31:43.896 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@823 -- # echo -s 00:31:43.896 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- 
bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:31:44.177 [2024-07-25 14:14:32.945578] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:44.177 [2024-07-25 14:14:32.947768] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:44.177 [2024-07-25 14:14:32.947984] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:31:44.177 [2024-07-25 14:14:32.948009] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:44.177 [2024-07-25 14:14:32.948144] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:31:44.177 [2024-07-25 14:14:32.948577] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:31:44.177 [2024-07-25 14:14:32.948603] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:31:44.177 [2024-07-25 14:14:32.948789] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@824 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.177 14:14:32 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.177 14:14:33 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:44.177 "name": "raid_bdev1", 00:31:44.177 "uuid": "ec7648b1-bc0f-4f4a-ac09-4dbe54b663cd", 00:31:44.177 "strip_size_kb": 0, 00:31:44.177 "state": "online", 00:31:44.177 "raid_level": "raid1", 00:31:44.177 "superblock": true, 00:31:44.177 "num_base_bdevs": 2, 00:31:44.177 "num_base_bdevs_discovered": 2, 00:31:44.177 "num_base_bdevs_operational": 2, 00:31:44.177 "base_bdevs_list": [ 00:31:44.177 { 00:31:44.177 "name": "BaseBdev1", 00:31:44.177 "uuid": "dedff42d-949a-55b8-b0da-5f97b7960f98", 00:31:44.177 "is_configured": true, 00:31:44.177 "data_offset": 2048, 00:31:44.177 "data_size": 63488 00:31:44.177 }, 00:31:44.177 { 00:31:44.177 "name": "BaseBdev2", 00:31:44.177 "uuid": 
"aa1e393e-c511-5353-a167-93c9312d9945", 00:31:44.177 "is_configured": true, 00:31:44.177 "data_offset": 2048, 00:31:44.177 "data_size": 63488 00:31:44.177 } 00:31:44.177 ] 00:31:44.177 }' 00:31:44.177 14:14:33 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:44.177 14:14:33 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.111 14:14:33 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.111 14:14:33 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@827 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:45.111 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@827 -- # data_offset=2048 00:31:45.111 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:45.370 [2024-07-25 14:14:34.365899] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:45.370 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@833 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b BaseBdev1 -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:45.628 [2024-07-25 14:14:34.634657] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare_delay 00:31:45.628 [2024-07-25 14:14:34.634706] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare_delay (1) smaller than existing raid bdev raid_bdev1 (2) 00:31:45.628 [2024-07-25 14:14:34.634717] bdev_raid.c:3777:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:45.628 spare_delay 00:31:45.628 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@834 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:45.886 [2024-07-25 14:14:34.918004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:45.886 [2024-07-25 14:14:34.918137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:45.886 [2024-07-25 14:14:34.918186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:45.886 [2024-07-25 14:14:34.918217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:45.886 [2024-07-25 14:14:34.918851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:45.886 [2024-07-25 14:14:34.918903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:45.886 [2024-07-25 14:14:34.919031] bdev_raid.c:3953:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:45.886 [2024-07-25 14:14:34.919049] bdev_raid.c:3758:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (1) smaller than existing raid bdev raid_bdev1 (2) 00:31:45.886 [2024-07-25 14:14:34.919056] bdev_raid.c:3777:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:45.886 spare 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.143 14:14:34 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.143 14:14:35 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:46.143 "name": "raid_bdev1", 00:31:46.143 "uuid": "ec7648b1-bc0f-4f4a-ac09-4dbe54b663cd", 00:31:46.143 "strip_size_kb": 0, 00:31:46.143 "state": "online", 00:31:46.143 "raid_level": "raid1", 00:31:46.143 "superblock": true, 00:31:46.143 "num_base_bdevs": 2, 00:31:46.143 "num_base_bdevs_discovered": 1, 00:31:46.143 "num_base_bdevs_operational": 1, 00:31:46.143 "base_bdevs_list": [ 00:31:46.143 { 00:31:46.143 "name": null, 00:31:46.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.143 "is_configured": false, 00:31:46.143 "data_offset": 2048, 00:31:46.143 "data_size": 63488 00:31:46.143 }, 00:31:46.143 { 00:31:46.143 "name": "BaseBdev2", 00:31:46.143 "uuid": "aa1e393e-c511-5353-a167-93c9312d9945", 00:31:46.143 "is_configured": true, 00:31:46.143 "data_offset": 2048, 00:31:46.143 "data_size": 63488 00:31:46.143 } 00:31:46.143 ] 00:31:46.143 }' 00:31:46.143 14:14:35 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:46.143 14:14:35 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.076 14:14:35 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@840 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare -s 00:31:47.076 [2024-07-25 14:14:36.098352] bdev_raid.c:3386:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:47.334 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@841 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -t 1000 00:31:47.593 [2024-07-25 14:14:36.507661] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:47.593 [ 00:31:47.593 { 00:31:47.593 "name": "BaseBdev1_malloc", 00:31:47.593 "aliases": [ 00:31:47.593 "84544ab6-ee6b-48fa-a790-d940a7da6b33" 00:31:47.593 ], 00:31:47.593 "product_name": "Malloc disk", 00:31:47.593 "block_size": 512, 00:31:47.593 "num_blocks": 65536, 00:31:47.593 "uuid": "84544ab6-ee6b-48fa-a790-d940a7da6b33", 00:31:47.593 "assigned_rate_limits": { 00:31:47.593 "rw_ios_per_sec": 0, 
00:31:47.593 "rw_mbytes_per_sec": 0, 00:31:47.593 "r_mbytes_per_sec": 0, 00:31:47.593 "w_mbytes_per_sec": 0 00:31:47.593 }, 00:31:47.593 "claimed": true, 00:31:47.593 "claim_type": "exclusive_write", 00:31:47.593 "zoned": false, 00:31:47.593 "supported_io_types": { 00:31:47.593 "read": true, 00:31:47.593 "write": true, 00:31:47.593 "unmap": true, 00:31:47.593 "flush": true, 00:31:47.593 "reset": true, 00:31:47.593 "nvme_admin": false, 00:31:47.593 "nvme_io": false, 00:31:47.593 "nvme_io_md": false, 00:31:47.593 "write_zeroes": true, 00:31:47.593 "zcopy": true, 00:31:47.593 "get_zone_info": false, 00:31:47.593 "zone_management": false, 00:31:47.593 "zone_append": false, 00:31:47.593 "compare": false, 00:31:47.593 "compare_and_write": false, 00:31:47.593 "abort": true, 00:31:47.593 "seek_hole": false, 00:31:47.593 "seek_data": false, 00:31:47.593 "copy": true, 00:31:47.593 "nvme_iov_md": false 00:31:47.593 }, 00:31:47.593 "memory_domains": [ 00:31:47.593 { 00:31:47.593 "dma_device_id": "system", 00:31:47.593 "dma_device_type": 1 00:31:47.593 }, 00:31:47.593 { 00:31:47.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.593 "dma_device_type": 2 00:31:47.593 } 00:31:47.593 ], 00:31:47.593 "driver_specific": {} 00:31:47.593 }, 00:31:47.593 { 00:31:47.593 "name": "BaseBdev1", 00:31:47.593 "aliases": [ 00:31:47.593 "dedff42d-949a-55b8-b0da-5f97b7960f98" 00:31:47.593 ], 00:31:47.593 "product_name": "passthru", 00:31:47.593 "block_size": 512, 00:31:47.593 "num_blocks": 65536, 00:31:47.593 "uuid": "dedff42d-949a-55b8-b0da-5f97b7960f98", 00:31:47.593 "assigned_rate_limits": { 00:31:47.593 "rw_ios_per_sec": 0, 00:31:47.593 "rw_mbytes_per_sec": 0, 00:31:47.593 "r_mbytes_per_sec": 0, 00:31:47.593 "w_mbytes_per_sec": 0 00:31:47.593 }, 00:31:47.593 "claimed": true, 00:31:47.593 "claim_type": "exclusive_write", 00:31:47.593 "zoned": false, 00:31:47.593 "supported_io_types": { 00:31:47.593 "read": true, 00:31:47.593 "write": true, 00:31:47.593 "unmap": true, 00:31:47.593 "flush": true, 00:31:47.593 "reset": true, 00:31:47.593 "nvme_admin": false, 00:31:47.593 "nvme_io": false, 00:31:47.593 "nvme_io_md": false, 00:31:47.593 "write_zeroes": true, 00:31:47.593 "zcopy": true, 00:31:47.593 "get_zone_info": false, 00:31:47.593 "zone_management": false, 00:31:47.593 "zone_append": false, 00:31:47.593 "compare": false, 00:31:47.593 "compare_and_write": false, 00:31:47.593 "abort": true, 00:31:47.593 "seek_hole": false, 00:31:47.593 "seek_data": false, 00:31:47.593 "copy": true, 00:31:47.593 "nvme_iov_md": false 00:31:47.593 }, 00:31:47.593 "memory_domains": [ 00:31:47.593 { 00:31:47.593 "dma_device_id": "system", 00:31:47.593 "dma_device_type": 1 00:31:47.593 }, 00:31:47.593 { 00:31:47.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.594 "dma_device_type": 2 00:31:47.594 } 00:31:47.594 ], 00:31:47.594 "driver_specific": { 00:31:47.594 "passthru": { 00:31:47.594 "name": "BaseBdev1", 00:31:47.594 "base_bdev_name": "BaseBdev1_malloc" 00:31:47.594 } 00:31:47.594 } 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "name": "BaseBdev2_malloc", 00:31:47.594 "aliases": [ 00:31:47.594 "0de858a8-f0a2-4925-a823-0d12f7e446dd" 00:31:47.594 ], 00:31:47.594 "product_name": "Malloc disk", 00:31:47.594 "block_size": 512, 00:31:47.594 "num_blocks": 65536, 00:31:47.594 "uuid": "0de858a8-f0a2-4925-a823-0d12f7e446dd", 00:31:47.594 "assigned_rate_limits": { 00:31:47.594 "rw_ios_per_sec": 0, 00:31:47.594 "rw_mbytes_per_sec": 0, 00:31:47.594 "r_mbytes_per_sec": 0, 00:31:47.594 "w_mbytes_per_sec": 0 00:31:47.594 }, 00:31:47.594 
"claimed": true, 00:31:47.594 "claim_type": "exclusive_write", 00:31:47.594 "zoned": false, 00:31:47.594 "supported_io_types": { 00:31:47.594 "read": true, 00:31:47.594 "write": true, 00:31:47.594 "unmap": true, 00:31:47.594 "flush": true, 00:31:47.594 "reset": true, 00:31:47.594 "nvme_admin": false, 00:31:47.594 "nvme_io": false, 00:31:47.594 "nvme_io_md": false, 00:31:47.594 "write_zeroes": true, 00:31:47.594 "zcopy": true, 00:31:47.594 "get_zone_info": false, 00:31:47.594 "zone_management": false, 00:31:47.594 "zone_append": false, 00:31:47.594 "compare": false, 00:31:47.594 "compare_and_write": false, 00:31:47.594 "abort": true, 00:31:47.594 "seek_hole": false, 00:31:47.594 "seek_data": false, 00:31:47.594 "copy": true, 00:31:47.594 "nvme_iov_md": false 00:31:47.594 }, 00:31:47.594 "memory_domains": [ 00:31:47.594 { 00:31:47.594 "dma_device_id": "system", 00:31:47.594 "dma_device_type": 1 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.594 "dma_device_type": 2 00:31:47.594 } 00:31:47.594 ], 00:31:47.594 "driver_specific": {} 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "name": "BaseBdev2", 00:31:47.594 "aliases": [ 00:31:47.594 "aa1e393e-c511-5353-a167-93c9312d9945" 00:31:47.594 ], 00:31:47.594 "product_name": "passthru", 00:31:47.594 "block_size": 512, 00:31:47.594 "num_blocks": 65536, 00:31:47.594 "uuid": "aa1e393e-c511-5353-a167-93c9312d9945", 00:31:47.594 "assigned_rate_limits": { 00:31:47.594 "rw_ios_per_sec": 0, 00:31:47.594 "rw_mbytes_per_sec": 0, 00:31:47.594 "r_mbytes_per_sec": 0, 00:31:47.594 "w_mbytes_per_sec": 0 00:31:47.594 }, 00:31:47.594 "claimed": true, 00:31:47.594 "claim_type": "exclusive_write", 00:31:47.594 "zoned": false, 00:31:47.594 "supported_io_types": { 00:31:47.594 "read": true, 00:31:47.594 "write": true, 00:31:47.594 "unmap": true, 00:31:47.594 "flush": true, 00:31:47.594 "reset": true, 00:31:47.594 "nvme_admin": false, 00:31:47.594 "nvme_io": false, 00:31:47.594 "nvme_io_md": false, 00:31:47.594 "write_zeroes": true, 00:31:47.594 "zcopy": true, 00:31:47.594 "get_zone_info": false, 00:31:47.594 "zone_management": false, 00:31:47.594 "zone_append": false, 00:31:47.594 "compare": false, 00:31:47.594 "compare_and_write": false, 00:31:47.594 "abort": true, 00:31:47.594 "seek_hole": false, 00:31:47.594 "seek_data": false, 00:31:47.594 "copy": true, 00:31:47.594 "nvme_iov_md": false 00:31:47.594 }, 00:31:47.594 "memory_domains": [ 00:31:47.594 { 00:31:47.594 "dma_device_id": "system", 00:31:47.594 "dma_device_type": 1 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.594 "dma_device_type": 2 00:31:47.594 } 00:31:47.594 ], 00:31:47.594 "driver_specific": { 00:31:47.594 "passthru": { 00:31:47.594 "name": "BaseBdev2", 00:31:47.594 "base_bdev_name": "BaseBdev2_malloc" 00:31:47.594 } 00:31:47.594 } 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "name": "raid_bdev1", 00:31:47.594 "aliases": [ 00:31:47.594 "ec7648b1-bc0f-4f4a-ac09-4dbe54b663cd" 00:31:47.594 ], 00:31:47.594 "product_name": "Raid Volume", 00:31:47.594 "block_size": 512, 00:31:47.594 "num_blocks": 63488, 00:31:47.594 "uuid": "ec7648b1-bc0f-4f4a-ac09-4dbe54b663cd", 00:31:47.594 "assigned_rate_limits": { 00:31:47.594 "rw_ios_per_sec": 0, 00:31:47.594 "rw_mbytes_per_sec": 0, 00:31:47.594 "r_mbytes_per_sec": 0, 00:31:47.594 "w_mbytes_per_sec": 0 00:31:47.594 }, 00:31:47.594 "claimed": false, 00:31:47.594 "zoned": false, 00:31:47.594 "supported_io_types": { 00:31:47.594 "read": true, 00:31:47.594 "write": true, 00:31:47.594 
"unmap": false, 00:31:47.594 "flush": false, 00:31:47.594 "reset": true, 00:31:47.594 "nvme_admin": false, 00:31:47.594 "nvme_io": false, 00:31:47.594 "nvme_io_md": false, 00:31:47.594 "write_zeroes": true, 00:31:47.594 "zcopy": false, 00:31:47.594 "get_zone_info": false, 00:31:47.594 "zone_management": false, 00:31:47.594 "zone_append": false, 00:31:47.594 "compare": false, 00:31:47.594 "compare_and_write": false, 00:31:47.594 "abort": false, 00:31:47.594 "seek_hole": false, 00:31:47.594 "seek_data": false, 00:31:47.594 "copy": false, 00:31:47.594 "nvme_iov_md": false 00:31:47.594 }, 00:31:47.594 "memory_domains": [ 00:31:47.594 { 00:31:47.594 "dma_device_id": "system", 00:31:47.594 "dma_device_type": 1 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.594 "dma_device_type": 2 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "dma_device_id": "system", 00:31:47.594 "dma_device_type": 1 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.594 "dma_device_type": 2 00:31:47.594 } 00:31:47.594 ], 00:31:47.594 "driver_specific": { 00:31:47.594 "raid": { 00:31:47.594 "uuid": "ec7648b1-bc0f-4f4a-ac09-4dbe54b663cd", 00:31:47.594 "strip_size_kb": 0, 00:31:47.594 "state": "online", 00:31:47.594 "raid_level": "raid1", 00:31:47.594 "superblock": true, 00:31:47.594 "num_base_bdevs": 2, 00:31:47.594 "num_base_bdevs_discovered": 2, 00:31:47.594 "num_base_bdevs_operational": 2, 00:31:47.594 "base_bdevs_list": [ 00:31:47.594 { 00:31:47.594 "name": "spare", 00:31:47.594 "uuid": "11da862c-a7dc-5383-a0db-1491244545a4", 00:31:47.594 "is_configured": true, 00:31:47.594 "data_offset": 2048, 00:31:47.594 "data_size": 63488 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "name": "BaseBdev2", 00:31:47.594 "uuid": "aa1e393e-c511-5353-a167-93c9312d9945", 00:31:47.594 "is_configured": true, 00:31:47.594 "data_offset": 2048, 00:31:47.594 "data_size": 63488 00:31:47.594 } 00:31:47.594 ] 00:31:47.594 } 00:31:47.594 } 00:31:47.594 }, 00:31:47.594 { 00:31:47.594 "name": "spare_delay", 00:31:47.594 "aliases": [ 00:31:47.594 "b3306681-3222-5abd-b60e-f6fad24ba9d8" 00:31:47.594 ], 00:31:47.594 "product_name": "delay", 00:31:47.594 "block_size": 512, 00:31:47.594 "num_blocks": 65536, 00:31:47.594 "uuid": "b3306681-3222-5abd-b60e-f6fad24ba9d8", 00:31:47.594 "assigned_rate_limits": { 00:31:47.594 "rw_ios_per_sec": 0, 00:31:47.594 "rw_mbytes_per_sec": 0, 00:31:47.594 "r_mbytes_per_sec": 0, 00:31:47.594 "w_mbytes_per_sec": 0 00:31:47.594 }, 00:31:47.594 "claimed": true, 00:31:47.594 "claim_type": "exclusive_write", 00:31:47.594 "zoned": false, 00:31:47.594 "supported_io_types": { 00:31:47.594 "read": true, 00:31:47.595 "write": true, 00:31:47.595 "unmap": true, 00:31:47.595 "flush": true, 00:31:47.595 "reset": true, 00:31:47.595 "nvme_admin": false, 00:31:47.595 "nvme_io": false, 00:31:47.595 "nvme_io_md": false, 00:31:47.595 "write_zeroes": true, 00:31:47.595 "zcopy": true, 00:31:47.595 "get_zone_info": false, 00:31:47.595 "zone_management": false, 00:31:47.595 "zone_append": false, 00:31:47.595 "compare": false, 00:31:47.595 "compare_and_write": false, 00:31:47.595 "abort": true, 00:31:47.595 "seek_hole": false, 00:31:47.595 "seek_data": false, 00:31:47.595 "copy": true, 00:31:47.595 "nvme_iov_md": false 00:31:47.595 }, 00:31:47.595 "memory_domains": [ 00:31:47.595 { 00:31:47.595 "dma_device_id": "system", 00:31:47.595 "dma_device_type": 1 00:31:47.595 }, 00:31:47.595 { 00:31:47.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.595 
"dma_device_type": 2 00:31:47.595 } 00:31:47.595 ], 00:31:47.595 "driver_specific": { 00:31:47.595 "delay": { 00:31:47.595 "name": "spare_delay", 00:31:47.595 "base_bdev_name": "BaseBdev1", 00:31:47.595 "uuid": "b3306681-3222-5abd-b60e-f6fad24ba9d8", 00:31:47.595 "avg_read_latency": 0, 00:31:47.595 "p99_read_latency": 0, 00:31:47.595 "avg_write_latency": 100000, 00:31:47.595 "p99_write_latency": 100000 00:31:47.595 } 00:31:47.595 } 00:31:47.595 }, 00:31:47.595 { 00:31:47.595 "name": "spare", 00:31:47.595 "aliases": [ 00:31:47.595 "11da862c-a7dc-5383-a0db-1491244545a4" 00:31:47.595 ], 00:31:47.595 "product_name": "passthru", 00:31:47.595 "block_size": 512, 00:31:47.595 "num_blocks": 65536, 00:31:47.595 "uuid": "11da862c-a7dc-5383-a0db-1491244545a4", 00:31:47.595 "assigned_rate_limits": { 00:31:47.595 "rw_ios_per_sec": 0, 00:31:47.595 "rw_mbytes_per_sec": 0, 00:31:47.595 "r_mbytes_per_sec": 0, 00:31:47.595 "w_mbytes_per_sec": 0 00:31:47.595 }, 00:31:47.595 "claimed": true, 00:31:47.595 "claim_type": "exclusive_write", 00:31:47.595 "zoned": false, 00:31:47.595 "supported_io_types": { 00:31:47.595 "read": true, 00:31:47.595 "write": true, 00:31:47.595 "unmap": true, 00:31:47.595 "flush": true, 00:31:47.595 "reset": true, 00:31:47.595 "nvme_admin": false, 00:31:47.595 "nvme_io": false, 00:31:47.595 "nvme_io_md": false, 00:31:47.595 "write_zeroes": true, 00:31:47.595 "zcopy": true, 00:31:47.595 "get_zone_info": false, 00:31:47.595 "zone_management": false, 00:31:47.595 "zone_append": false, 00:31:47.595 "compare": false, 00:31:47.595 "compare_and_write": false, 00:31:47.595 "abort": true, 00:31:47.595 "seek_hole": false, 00:31:47.595 "seek_data": false, 00:31:47.595 "copy": true, 00:31:47.595 "nvme_iov_md": false 00:31:47.595 }, 00:31:47.595 "memory_domains": [ 00:31:47.595 { 00:31:47.595 "dma_device_id": "system", 00:31:47.595 "dma_device_type": 1 00:31:47.595 }, 00:31:47.595 { 00:31:47.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.595 "dma_device_type": 2 00:31:47.595 } 00:31:47.595 ], 00:31:47.595 "driver_specific": { 00:31:47.595 "passthru": { 00:31:47.595 "name": "spare", 00:31:47.595 "base_bdev_name": "spare_delay" 00:31:47.595 } 00:31:47.595 } 00:31:47.595 } 00:31:47.595 ] 00:31:47.595 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@844 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:47.595 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:47.595 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:47.595 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:47.595 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:47.595 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.595 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.853 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:47.853 "name": "raid_bdev1", 00:31:47.853 "uuid": "ec7648b1-bc0f-4f4a-ac09-4dbe54b663cd", 00:31:47.853 "strip_size_kb": 0, 00:31:47.853 "state": "online", 00:31:47.853 "raid_level": "raid1", 00:31:47.853 "superblock": true, 00:31:47.853 "num_base_bdevs": 2, 
00:31:47.853 "num_base_bdevs_discovered": 2, 00:31:47.853 "num_base_bdevs_operational": 2, 00:31:47.853 "base_bdevs_list": [ 00:31:47.853 { 00:31:47.853 "name": "spare", 00:31:47.853 "uuid": "11da862c-a7dc-5383-a0db-1491244545a4", 00:31:47.853 "is_configured": true, 00:31:47.853 "data_offset": 2048, 00:31:47.853 "data_size": 63488 00:31:47.853 }, 00:31:47.853 { 00:31:47.853 "name": "BaseBdev2", 00:31:47.853 "uuid": "aa1e393e-c511-5353-a167-93c9312d9945", 00:31:47.853 "is_configured": true, 00:31:47.853 "data_offset": 2048, 00:31:47.853 "data_size": 63488 00:31:47.853 } 00:31:47.853 ] 00:31:47.853 }' 00:31:47.853 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:47.853 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:47.853 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:47.853 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:47.853 14:14:36 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@847 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests -t 1 00:31:48.112 [2024-07-25 14:14:36.978489] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:48.112 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:48.112 Zero copy mechanism will not be used. 00:31:48.112 Running I/O for 1 seconds... 00:31:49.049 00:31:49.049 Latency(us) 00:31:49.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.049 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:49.050 raid_bdev1 : 1.10 28.17 84.50 0.00 0.00 62978.10 333.27 110100.48 00:31:49.050 =================================================================================================================== 00:31:49.050 Total : 28.17 84.50 0.00 0.00 62978.10 333.27 110100.48 00:31:49.308 [2024-07-25 14:14:38.101132] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.308 0 00:31:49.308 14:14:38 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@850 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:49.565 [2024-07-25 14:14:38.381330] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:49.565 [2024-07-25 14:14:38.381379] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:49.565 [2024-07-25 14:14:38.381469] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:49.565 bdevperf: bdev_raid.c:433: raid_bdev_free_base_bdev_resource: Assertion `base_info->configure_cb == NULL' failed. 
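Condensed, the step that fails above is the one-second perform_tests run followed by bdev_raid_delete; a minimal reproduction sketch against the same socket, with every command taken from the log and from lines 850-851 of bdev_raid.sh quoted in the backtrace below:

    rpc_server=/var/tmp/spdk-raid.sock
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_server"
    # Drive one second of I/O through the assembled raid1 volume.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$rpc_server" perform_tests -t 1
    # With the spare attached via 'bdev_raid_add_base_bdev raid_bdev1 spare -s', this
    # delete is the RPC that trips the configure_cb assertion reported just above.
    $rpc_py bdev_raid_delete raid_bdev1
    # The script then asserts no raid bdevs remain (bdev_raid.sh line 851).
    [[ $($rpc_py bdev_raid_get_bdevs all | jq 'length') == 0 ]]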
00:31:56.191 Connection closed with partial response: 00:31:56.191 00:31:56.191 00:31:56.191 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 803: 147524 Aborted (core dumped) "$rootdir/build/examples/bdevperf" -r $rpc_server -T $raid_bdev_name -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@850 -- # trap - ERR 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@850 -- # print_backtrace 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@1155 -- # args=('true' '2' 'true' '2' 'raid_add_bdev_without_rebuild' 'raid_add_bdev_without_rebuild_sb') 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@1155 -- # local args 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@1157 -- # xtrace_disable 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.191 ========== Backtrace start: ========== 00:31:56.191 00:31:56.191 in /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh:850 -> raid_add_bdev_without_rebuild(["2"],["true"]) 00:31:56.191 ... 00:31:56.191 845 00:31:56.191 846 # Start user I/O 00:31:56.191 847 "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s $rpc_server perform_tests -t 1 00:31:56.191 848 00:31:56.191 849 # Stop the RAID bdev 00:31:56.191 => 850 $rpc_py bdev_raid_delete $raid_bdev_name 00:31:56.191 851 [[ $($rpc_py bdev_raid_get_bdevs all | jq 'length') == 0 ]] 00:31:56.191 852 00:31:56.191 853 # Compare data on the added bdev and other base bdevs 00:31:56.191 854 nbd_start_disks $rpc_server "spare" "/dev/nbd0" 00:31:56.191 855 for bdev in "${base_bdevs[@]:1}"; do 00:31:56.191 ... 00:31:56.191 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["raid_add_bdev_without_rebuild_sb"],["raid_add_bdev_without_rebuild"],["2"],["true"]) 00:31:56.191 ... 00:31:56.191 1120 timing_enter $test_name 00:31:56.191 1121 echo "************************************" 00:31:56.191 1122 echo "START TEST $test_name" 00:31:56.191 1123 echo "************************************" 00:31:56.191 1124 xtrace_restore 00:31:56.191 1125 time "$@" 00:31:56.191 1126 xtrace_disable 00:31:56.191 1127 echo "************************************" 00:31:56.191 1128 echo "END TEST $test_name" 00:31:56.191 1129 echo "************************************" 00:31:56.191 1130 timing_exit $test_name 00:31:56.191 ... 00:31:56.191 in /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh:1036 -> main([]) 00:31:56.191 ... 00:31:56.191 1031 run_test "raid_rebuild_test" raid_rebuild_test raid1 $n false false true 00:31:56.191 1032 run_test "raid_rebuild_test_sb" raid_rebuild_test raid1 $n true false true 00:31:56.191 1033 run_test "raid_rebuild_test_io" raid_rebuild_test raid1 $n false true true 00:31:56.191 1034 run_test "raid_rebuild_test_sb_io" raid_rebuild_test raid1 $n true true true 00:31:56.191 1035 run_test "raid_add_bdev_without_rebuild" raid_add_bdev_without_rebuild $n false 00:31:56.191 1036 run_test "raid_add_bdev_without_rebuild_sb" raid_add_bdev_without_rebuild $n true 00:31:56.191 1037 done 00:31:56.191 1038 fi 00:31:56.191 1039 00:31:56.191 1040 if [ "$CONFIG_RAID5F" == y ]; then 00:31:56.191 1041 for n in {3..4}; do 00:31:56.191 ... 
00:31:56.191 00:31:56.191 ========== Backtrace end ========== 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- common/autotest_common.sh@1194 -- # return 0 00:31:56.191 00:31:56.191 real 0m13.791s 00:31:56.191 user 0m13.216s 00:31:56.191 sys 0m1.117s 00:31:56.191 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@1 -- # cleanup 00:31:56.192 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@58 -- # '[' -n 147524 ']' 00:31:56.192 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@58 -- # ps -p 147524 00:31:56.192 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:31:56.192 14:14:44 bdev_raid.raid_add_bdev_without_rebuild_sb -- bdev/bdev_raid.sh@1 -- # exit 1 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1125 -- # trap - ERR 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1125 -- # print_backtrace 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1155 -- # args=('/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh' 'bdev_raid' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1155 -- # local args 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1157 -- # xtrace_disable 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:56.192 ========== Backtrace start: ========== 00:31:56.192 00:31:56.192 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["bdev_raid"],["/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh"]) 00:31:56.192 ... 00:31:56.192 1120 timing_enter $test_name 00:31:56.192 1121 echo "************************************" 00:31:56.192 1122 echo "START TEST $test_name" 00:31:56.192 1123 echo "************************************" 00:31:56.192 1124 xtrace_restore 00:31:56.192 1125 time "$@" 00:31:56.192 1126 xtrace_disable 00:31:56.192 1127 echo "************************************" 00:31:56.192 1128 echo "END TEST $test_name" 00:31:56.192 1129 echo "************************************" 00:31:56.192 1130 timing_exit $test_name 00:31:56.192 ... 00:31:56.192 in /home/vagrant/spdk_repo/spdk/autotest.sh:194 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:31:56.192 ... 00:31:56.192 189 run_test "app_cmdline" $rootdir/test/app/cmdline.sh 00:31:56.192 190 run_test "version" $rootdir/test/app/version.sh 00:31:56.192 191 00:31:56.192 192 if [ $SPDK_TEST_BLOCKDEV -eq 1 ]; then 00:31:56.192 193 run_test "blockdev_general" $rootdir/test/bdev/blockdev.sh 00:31:56.192 => 194 run_test "bdev_raid" $rootdir/test/bdev/bdev_raid.sh 00:31:56.192 195 run_test "bdevperf_config" $rootdir/test/bdev/bdevperf/test_config.sh 00:31:56.192 196 if [[ $(uname -s) == Linux ]]; then 00:31:56.192 197 run_test "reactor_set_interrupt" $rootdir/test/interrupt/reactor_set_interrupt.sh 00:31:56.192 198 run_test "reap_unregistered_poller" $rootdir/test/interrupt/reap_unregistered_poller.sh 00:31:56.192 199 fi 00:31:56.192 ... 
00:31:56.192 00:31:56.192 ========== Backtrace end ========== 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1194 -- # return 0 00:31:56.192 00:31:56.192 real 16m59.491s 00:31:56.192 user 29m19.309s 00:31:56.192 sys 1m59.766s 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1 -- # autotest_cleanup 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:56.192 14:14:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:06.175 ##### CORE BT bdevperf_147524.core.bt.txt ##### 00:32:06.175 00:32:06.175 gdb: warning: Couldn't determine a path for the index cache directory. 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_0 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_1 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_2 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_3 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_4 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_5 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_6 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_7 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_8 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_9 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_10 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_11 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_12 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.175 warning: Can't open file /dev/hugepages/spdk_pid147524map_13 (deleted) during file-backed mapping note processing 00:32:06.175 00:32:06.176 warning: Can't open file /dev/hugepages/spdk_pid147524map_14 (deleted) during file-backed mapping note processing 00:32:06.176 00:32:06.176 warning: Can't open file /dev/hugepages/spdk_pid147524map_15 (deleted) during file-backed mapping note processing 00:32:06.176 00:32:06.176 warning: Can't open file /dev/hugepages/spdk_pid147524map_16 (deleted) during file-backed mapping note processing 00:32:06.176 00:32:06.176 warning: Can't open file /dev/hugepages/spdk_pid147524map_17 (deleted) during file-backed mapping note processing 00:32:06.176 00:32:06.176 warning: Can't open file /dev/hugepages/spdk_pid147524map_18 (deleted) during file-backed mapping note processing 00:32:06.176 00:32:06.176 warning: Can't open file /dev/hugepages/spdk_pid147524map_19 (deleted) during file-backed mapping note processing 00:32:06.176 00:32:06.176 
warning: Can't open file /dev/hugepages/spdk_pid147524map_20 (deleted) during file-backed mapping note processing
00:32:06.176 warning: Can't open file /dev/hugepages/spdk_pid147524map_21 through spdk_pid147524map_239 (deleted) during file-backed mapping note processing [the same warning is emitted once per deleted hugepage map file, timestamps 00:32:06.176-00:32:06.179]
00:32:06.179 [New LWP 147524]
00:32:06.179 [New LWP 147526]
00:32:06.179 [Thread debugging using libthread_db enabled]
00:32:06.179 Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
00:32:06.179 Core was generated by `/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock'.
00:32:06.179 Program terminated with signal SIGABRT, Aborted.
00:32:06.179 #0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=139966974012032) at ./nptl/pthread_kill.c:44
00:32:06.179 44 ./nptl/pthread_kill.c: No such file or directory.
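The warnings above concern the hugepage-backed memory of the crashed bdevperf process: its mappings are backed by files under /dev/hugepages named spdk_pid147524map_<N>, and those backing files had already been removed by the time the core was examined, so GDB reports each mapping as "(deleted)" and cannot re-read its contents. Only hugepage-resident data is missing from the dump; the thread backtraces below were still recovered from the core. A minimal, illustrative C sketch of why a deleted hugetlbfs backing file looks like this (hypothetical file name; assumes hugetlbfs is mounted at /dev/hugepages with a free 2 MiB hugepage; this is not DPDK/SPDK source):

    /* Illustrative only: map a hugetlbfs-backed file, then unlink it. The
     * mapping keeps working, but a later core/debug pass sees the backing
     * file as "<name> (deleted)" and cannot reopen it -- the situation the
     * GDB warnings above describe. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical path; the real files are named spdk_pid<N>map_<M>. */
        const char *path = "/dev/hugepages/example_map_0";
        size_t len = 2 * 1024 * 1024;            /* one 2 MiB hugepage */

        int fd = open(path, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("open"); return 1; }

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); unlink(path); return 1; }

        unlink(path);                            /* backing file now "(deleted)" */
        ((volatile char *)p)[0] = 1;             /* mapping itself still usable */

        munmap(p, len);
        close(fd);
        return 0;
    }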
00:32:06.179 [Current thread is 1 (Thread 0x7f4c99c42a80 (LWP 147524))]
00:32:06.179
00:32:06.179 Thread 2 (Thread 0x7f4c963ff640 (LWP 147526)):
00:32:06.179 #0 0x00007f4c9a060e2e in epoll_wait (epfd=6, events=0x7f4c963fd940, maxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
00:32:06.179 sc_ret = -4
00:32:06.179 sc_cancel_oldtype = 0
00:32:06.179 sc_ret =
00:32:06.179 #1 0x000056135222b4c2 in eal_intr_handle_interrupts (pfd=6, totalfds=1) at ../lib/eal/linux/eal_interrupts.c:1077
00:32:06.179 events = {{events = 2520767128, data = {ptr = 0x963fdaa000007f4c, fd = 32588, u32 = 32588, u64 = 10826612409951616844}}}
00:32:06.179 nfds = 0
00:32:06.179 #2 0x000056135222b9d4 in eal_intr_thread_main (arg=0x0) at ../lib/eal/linux/eal_interrupts.c:1163
00:32:06.179 pipe_event = {events = 3, data = {ptr = 0x4, fd = 4, u32 = 4, u64 = 4}}
00:32:06.179 src = 0x0
00:32:06.179 numfds = 1
00:32:06.179 pfd = 6
00:32:06.179 __func__ = "eal_intr_thread_main"
00:32:06.179 #3 0x00005613521dbcdd in control_thread_start (arg=0x60300002d520) at ../lib/eal/common/eal_common_thread.c:282
00:32:06.179 params = 0x60300002d520
00:32:06.179 start_arg = 0x0
00:32:06.179 start_routine = 0x56135222b5aa
00:32:06.179 #4 0x00005613522146aa in thread_start_wrapper (arg=0x7ffc8db5e790) at ../lib/eal/unix/rte_thread.c:114
00:32:06.179 ctx = 0x7ffc8db5e790
00:32:06.179 thread_func = 0x5613521dbc40
00:32:06.179 thread_args = 0x60300002d520
00:32:06.179 ret = 0
00:32:06.179 #5 0x00007f4c99fcfac3 in start_thread (arg=) at ./nptl/pthread_create.c:442
00:32:06.179 ret =
00:32:06.179 pd =
00:32:06.179 out =
00:32:06.179 unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140722685991936, 8645237266354246360, 139966915016256, 17, 139966977734608, 140722685992288, -8546951371280903464, -8546918928117296424}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
00:32:06.179 not_first_call =
00:32:06.179 #6 0x00007f4c9a061850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
00:32:06.179 No locals.
00:32:06.179
00:32:06.179 Thread 1 (Thread 0x7f4c99c42a80 (LWP 147524)):
00:32:06.179 #0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=139966974012032) at ./nptl/pthread_kill.c:44
00:32:06.179 tid =
00:32:06.179 ret = 0
00:32:06.179 pd = 0x7f4c99c42a80
00:32:06.179 old_mask = {__val = {106790067096172, 106790067095872, 106790067096172, 0, 0, 0, 0, 0, 549755813888, 140722685994640, 35184438083584, 0, 35184433954048, 9247231821904913664, 18446744073709551615, 9247231821904913664}}
00:32:06.179 ret =
00:32:06.179 pd =
00:32:06.179 old_mask =
00:32:06.179 ret =
00:32:06.179 tid =
00:32:06.179 ret =
00:32:06.179 resultvar =
00:32:06.179 resultvar =
00:32:06.179 __arg3 =
00:32:06.179 __arg2 =
00:32:06.179 __arg1 =
00:32:06.179 _a3 =
00:32:06.179 _a2 =
00:32:06.179 _a1 =
00:32:06.179 __futex =
00:32:06.179 resultvar =
00:32:06.179 __arg3 =
00:32:06.179 __arg2 =
00:32:06.179 __arg1 =
00:32:06.179 _a3 =
00:32:06.179 _a2 =
00:32:06.179 _a1 =
00:32:06.179 __futex =
00:32:06.179 __private =
00:32:06.180 __oldval =
00:32:06.180 result =
00:32:06.180 #1 __pthread_kill_internal (signo=6, threadid=139966974012032) at ./nptl/pthread_kill.c:78
00:32:06.180 No locals.
00:32:06.180 #2 __GI___pthread_kill (threadid=139966974012032, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
00:32:06.180 No locals.
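Thread 2 above is the DPDK EAL interrupt thread, parked in epoll_wait() with an infinite timeout (timeout=-1 in its frame #0); it is idle and not involved in the crash. A generic, illustrative C sketch of that style of wait loop, not the eal_interrupts.c implementation itself:

    /* Illustrative sketch of an epoll-based interrupt wait loop: block
     * indefinitely (timeout = -1), then dispatch whatever fd woke us up.
     * With nothing registered it simply blocks, like the idle thread above. */
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    static void interrupt_wait_loop(int epfd)
    {
        struct epoll_event ev;

        for (;;) {
            int n = epoll_wait(epfd, &ev, 1, -1);   /* matches frame #0 above */
            if (n < 0)
                continue;                           /* e.g. interrupted by a signal */
            if (n > 0)
                printf("event on fd %d\n", ev.data.fd);
        }
    }

    int main(void)
    {
        int epfd = epoll_create1(0);
        if (epfd < 0) { perror("epoll_create1"); return 1; }
        /* A real interrupt thread would register device/pipe fds with
         * epoll_ctl(EPOLL_CTL_ADD) before entering the loop. */
        interrupt_wait_loop(epfd);
        close(epfd);
        return 0;
    }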
00:32:06.180 #3 0x00007f4c99f7d476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
00:32:06.180 ret =
00:32:06.180 #4 0x00007f4c99f637f3 in __GI_abort () at ./stdlib/abort.c:79
00:32:06.180 save_stage = 1
00:32:06.180 act = {__sigaction_handler = {sa_handler = 0x60c000007480, sa_sigaction = 0x60c000007480}, sa_mask = {__val = {18, 139966928510976, 0, 4, 139966979060970, 0, 47244640743, 140722685994816, 94640991113600, 0, 9247231821904913664, 5, 139966979098000, 0, 17590335749446, 140722685995024}}, sa_flags = 475309312, sa_restorer = 0x7f4c96c05000}
00:32:06.180 sigs = {__val = {32, 139966979335840, 94640988160672, 433, 94640988162272, 140722685995568, 139966974012032, 139966977644782, 206158430256, 140722685995040, 206158430232, 140722685995040, 140722685994832, 9247231821904913664, 140722686000129, 139966979061410}}
00:32:06.180 #5 0x00007f4c99f6371b in __assert_fail_base (fmt=0x7f4c9a118130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5613527b08e0 "base_info->configure_cb == NULL", file=0x5613527b02a0 "bdev_raid.c", line=433, function=) at ./assert/assert.c:92
00:32:06.180 str = 0x60c000007480 "\026"
00:32:06.180 total = 4096
00:32:06.180 #6 0x00007f4c99f74e96 in __GI___assert_fail (assertion=0x5613527b08e0 "base_info->configure_cb == NULL", file=0x5613527b02a0 "bdev_raid.c", line=433, function=0x5613527b4a60 <__PRETTY_FUNCTION__.84> "raid_bdev_free_base_bdev_resource") at ./assert/assert.c:101
00:32:06.180 No locals.
00:32:06.180 #7 0x000056135189cc84 in raid_bdev_free_base_bdev_resource (base_info=0x611000016840) at bdev_raid.c:433
00:32:06.180 raid_bdev = 0x617000012a00
00:32:06.180 __PRETTY_FUNCTION__ = "raid_bdev_free_base_bdev_resource"
00:32:06.180 #8 0x000056135189da66 in _raid_bdev_destruct (ctxt=0x617000012a00) at bdev_raid.c:497
00:32:06.180 raid_bdev = 0x617000012a00
00:32:06.180 base_info = 0x611000016840
00:32:06.180 __func__ = "_raid_bdev_destruct"
00:32:06.180 __PRETTY_FUNCTION__ = "_raid_bdev_destruct"
00:32:06.180 #9 0x000056135189791d in spdk_thread_exec_msg (thread=0x619000006480, fn=0x56135189d6ff <_raid_bdev_destruct>, ctx=0x617000012a00) at /home/vagrant/spdk_repo/spdk/include/spdk/thread.h:550
00:32:06.180 __PRETTY_FUNCTION__ = "spdk_thread_exec_msg"
00:32:06.180 #10 0x000056135189e000 in raid_bdev_destruct (ctx=0x617000012a00) at bdev_raid.c:517
00:32:06.180 No locals.
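Frames #5-#7 identify the actual failure: the assertion `base_info->configure_cb == NULL` at bdev_raid.c:433 fired inside raid_bdev_free_base_bdev_resource() while the raid bdev was being destructed (frames #8-#10, dispatched through spdk_thread_exec_msg()). In other words, a base bdev still had a configure callback outstanding at the moment its resources were freed. A minimal, hypothetical C sketch of that invariant (identifier names are taken from the backtrace; the struct layout and function bodies are stand-ins, not SPDK's actual bdev_raid.c):

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical stand-ins modelled on the identifiers in the backtrace. */
    typedef void (*raid_base_bdev_cb)(void *ctx, int status);

    struct raid_base_bdev_info {
        raid_base_bdev_cb configure_cb;   /* set while a configure is in flight */
        void *configure_cb_ctx;
    };

    /* The invariant that fired: a base bdev's resources may only be freed
     * once no configure callback is still pending. */
    static void free_base_bdev_resource(struct raid_base_bdev_info *base_info)
    {
        assert(base_info->configure_cb == NULL);  /* cf. bdev_raid.c:433 above */
        /* ... release the claim, close descriptors, etc. ... */
        base_info->configure_cb_ctx = NULL;
    }

    int main(void)
    {
        struct raid_base_bdev_info idle = { .configure_cb = NULL };
        free_base_bdev_resource(&idle);           /* invariant holds: no abort */

        /* If destruct ran while a configure was still pending, configure_cb
         * would be non-NULL at the assert and the process would abort with
         * SIGABRT -- the crash captured in this core. */
        return 0;
    }

The frames that follow (#11 onward) show the destruct was reached from the bdev unregister path executing as a message on the reactor thread; whether the pending callback reflects a teardown-ordering bug or the test sequence cannot be determined from the core alone.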
00:32:06.180 #11 0x0000561351f86223 in bdev_destroy_cb (io_device=0x617000012a01) at bdev.c:7852
00:32:06.180 rc = 24944
00:32:06.180 bdev = 0x617000012a00
00:32:06.180 cb_fn = 0x5613518d1266
00:32:06.180 cb_arg = 0x602000004e70
00:32:06.180 __func__ = "bdev_destroy_cb"
00:32:06.180 #12 0x000056135209f2d7 in _finish_unregister (arg=0x613000004d40) at thread.c:2200
00:32:06.180 dev = 0x613000004d40
00:32:06.180 thread = 0x619000006480
00:32:06.180 __PRETTY_FUNCTION__ = "_finish_unregister"
00:32:06.180 __func__ = "_finish_unregister"
00:32:06.180 #13 0x000056135208df51 in msg_queue_run_batch (thread=0x619000006480, max_msgs=8) at thread.c:858
00:32:06.180 msg = 0x20000405f380
00:32:06.180 count = 1
00:32:06.180 i = 0
00:32:06.180 messages = {0x20000405f380, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
00:32:06.180 notify = 1
00:32:06.180 rc = 927
00:32:06.180 __func__ = "msg_queue_run_batch"
00:32:06.180 __PRETTY_FUNCTION__ = "msg_queue_run_batch"
00:32:06.180 #14 0x0000561352092784 in thread_poll (thread=0x619000006480, max_msgs=0, now=3985132448583) at thread.c:1080
00:32:06.180 msg_count = 0
00:32:06.180 poller = 0x0
00:32:06.180 tmp = 0x0
00:32:06.180 critical_msg = 0x0
00:32:06.180 rc = 0
00:32:06.180 #15 0x0000561352093b36 in spdk_thread_poll (thread=0x619000006480, max_msgs=0, now=3985132448583) at thread.c:1177
00:32:06.180 orig_thread = 0x0
00:32:06.180 rc = 4095
00:32:06.180 #16 0x0000561351edf819 in _reactor_run (reactor=0x617000012680) at reactor.c:918
00:32:06.180 thread = 0x619000006480
00:32:06.180 lw_thread = 0x6190000067c8
00:32:06.180 tmp = 0x0
00:32:06.180 now = 3985132448583
00:32:06.180 rc = 1
00:32:06.180 #17 0x0000561351ee0155 in reactor_run (arg=0x617000012680) at reactor.c:956
00:32:06.180 reactor = 0x617000012680
00:32:06.180 thread = 0x0
00:32:06.180 lw_thread = 0x617000012680
00:32:06.180 tmp = 0x0
00:32:06.180 thread_name = "reactor_0\000\265\215\374\177\000\000\320\354\265\215\374\177\000\000\366T\033R\023V\000"
00:32:06.180 last_sched = 0
00:32:06.180 __func__ = "reactor_run"
00:32:06.180 #18 0x0000561351ee10b1 in spdk_reactors_start () at reactor.c:1072
00:32:06.180 reactor = 0x617000012680
00:32:06.180 i = 4294967295
00:32:06.180 current_core = 0
00:32:06.180 rc = 0
00:32:06.180 __func__ = "spdk_reactors_start"
00:32:06.180 __PRETTY_FUNCTION__ = "spdk_reactors_start"
00:32:06.180 #19 0x0000561351ecf626 in spdk_app_start (opts_user=0x7ffc8db5f1b0, start_fn=0x5613517a250e , arg1=0x0) at app.c:980
00:32:06.180 rc = 0
00:32:06.180 tty = 0x0
00:32:06.180 tmp_cpumask = {str = '\000' , cpus = "\001", '\000' }
00:32:06.180 g_env_was_setup = false
00:32:06.180 opts_local = {name = 0x56135274f060 "bdevperf", json_config_file = 0x0, json_config_ignore_errors = false, reserved17 = "\000\000\000\000\000\000", rpc_addr = 0x7ffc8db5fc0d "/var/tmp/spdk-raid.sock", reactor_mask = 0x5613529b00a0 "0x1", tpoint_group_mask = 0x0, shm_id = -1, reserved52 = "\000\000\000", shutdown_cb = 0x5613517a355c , enable_coredump = true, reserved65 = "\000\000", mem_channel = -1, main_core = -1, mem_size = -1, no_pci = false, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved84 = "\000\000\000", hugedir = 0x0, print_level = SPDK_LOG_DEBUG, reserved100 = "\000\000\000", num_pci_addr = 0, pci_blocked = 0x0, pci_allowed = 0x0, iova_mode = 0x0, delay_subsystem_init = false, reserved137 = "\000\000\000\000\000\000", num_entries = 32768, env_context = 0x0, log = 0x0, base_virtaddr = 35184372088832, opts_size = 253, disable_signal_handlers = false, interrupt_mode = false,
reserved186 = "\000\000\000\000\000", msg_mempool_size = 262143, rpc_allowlist = 0x0, vf_token = 0x0, lcore_map = 0x0, rpc_log_level = SPDK_LOG_DISABLED, rpc_log_file = 0x0, json_data = 0x0, json_data_size = 0, disable_cpumask_locks = false} 00:32:06.180 opts = 0x7ffc8db5ee00 00:32:06.180 i = 128 00:32:06.180 core = 4294967295 00:32:06.180 __func__ = "spdk_app_start" 00:32:06.180 #20 0x00005613517a51e3 in main (argc=19, argv=0x7ffc8db5f438) at bdevperf.c:2887 00:32:06.180 opts = {name = 0x56135274f060 "bdevperf", json_config_file = 0x0, json_config_ignore_errors = false, reserved17 = "\000\000\000\000\000\000", rpc_addr = 0x7ffc8db5fc0d "/var/tmp/spdk-raid.sock", reactor_mask = 0x0, tpoint_group_mask = 0x0, shm_id = -1, reserved52 = "\000\000\000", shutdown_cb = 0x5613517a355c , enable_coredump = true, reserved65 = "\000\000", mem_channel = -1, main_core = -1, mem_size = -1, no_pci = false, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved84 = "\000\000\000", hugedir = 0x0, print_level = SPDK_LOG_DEBUG, reserved100 = "\000\000\000", num_pci_addr = 0, pci_blocked = 0x0, pci_allowed = 0x0, iova_mode = 0x0, delay_subsystem_init = false, reserved137 = "\000\000\000\000\000\000", num_entries = 32768, env_context = 0x0, log = 0x0, base_virtaddr = 35184372088832, opts_size = 253, disable_signal_handlers = false, interrupt_mode = false, reserved186 = "\000\000\000\000\000", msg_mempool_size = 0, rpc_allowlist = 0x0, vf_token = 0x0, lcore_map = 0x0, rpc_log_level = SPDK_LOG_DISABLED, rpc_log_file = 0x0, json_data = 0x0, json_data_size = 0, disable_cpumask_locks = false} 00:32:06.180 rc = 1 00:32:06.180 00:32:06.180 -- 00:32:06.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:06.748 Waiting for block devices as requested 00:32:07.006 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:07.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:07.265 Cleaning 00:32:07.265 Removing: /var/run/dpdk/spdk0/config 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:07.265 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:07.265 Removing: /dev/shm/bdevperf_trace.pid147524 00:32:07.265 Removing: /dev/shm/spdk_tgt_trace.pid112164 00:32:07.265 Removing: /var/tmp/spdk_cpu_lock_000 00:32:07.265 Removing: /var/run/dpdk/spdk0 00:32:07.265 Removing: /var/run/dpdk/spdk_pid111921 00:32:07.265 Removing: /var/run/dpdk/spdk_pid112164 00:32:07.265 Removing: /var/run/dpdk/spdk_pid112409 00:32:07.265 Removing: /var/run/dpdk/spdk_pid112528 00:32:07.265 Removing: /var/run/dpdk/spdk_pid112587 00:32:07.265 Removing: /var/run/dpdk/spdk_pid112728 00:32:07.265 Removing: /var/run/dpdk/spdk_pid112751 00:32:07.265 Removing: /var/run/dpdk/spdk_pid112915 00:32:07.265 Removing: /var/run/dpdk/spdk_pid113188 00:32:07.265 Removing: /var/run/dpdk/spdk_pid113381 00:32:07.265 Removing: /var/run/dpdk/spdk_pid113489 00:32:07.265 Removing: /var/run/dpdk/spdk_pid113602 00:32:07.265 Removing: /var/run/dpdk/spdk_pid113728 00:32:07.265 Removing: /var/run/dpdk/spdk_pid113830 00:32:07.523 Removing: /var/run/dpdk/spdk_pid113885 00:32:07.523 Removing: /var/run/dpdk/spdk_pid113935 00:32:07.523 Removing: 
/var/run/dpdk/spdk_pid114008 00:32:07.523 Removing: /var/run/dpdk/spdk_pid114133 00:32:07.523 Removing: /var/run/dpdk/spdk_pid114679 00:32:07.523 Removing: /var/run/dpdk/spdk_pid114764 00:32:07.523 Removing: /var/run/dpdk/spdk_pid114843 00:32:07.523 Removing: /var/run/dpdk/spdk_pid114866 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115029 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115049 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115207 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115228 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115297 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115327 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115398 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115426 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115632 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115682 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115730 00:32:07.523 Removing: /var/run/dpdk/spdk_pid115826 00:32:07.523 Removing: /var/run/dpdk/spdk_pid116009 00:32:07.523 Removing: /var/run/dpdk/spdk_pid116098 00:32:07.523 Removing: /var/run/dpdk/spdk_pid116166 00:32:07.523 Removing: /var/run/dpdk/spdk_pid117442 00:32:07.523 Removing: /var/run/dpdk/spdk_pid117673 00:32:07.523 Removing: /var/run/dpdk/spdk_pid117884 00:32:07.523 Removing: /var/run/dpdk/spdk_pid118014 00:32:07.523 Removing: /var/run/dpdk/spdk_pid118166 00:32:07.523 Removing: /var/run/dpdk/spdk_pid118246 00:32:07.523 Removing: /var/run/dpdk/spdk_pid118283 00:32:07.523 Removing: /var/run/dpdk/spdk_pid118317 00:32:07.523 Removing: /var/run/dpdk/spdk_pid118794 00:32:07.523 Removing: /var/run/dpdk/spdk_pid118896 00:32:07.523 Removing: /var/run/dpdk/spdk_pid119015 00:32:07.523 Removing: /var/run/dpdk/spdk_pid119080 00:32:07.523 Removing: /var/run/dpdk/spdk_pid120855 00:32:07.523 Removing: /var/run/dpdk/spdk_pid121236 00:32:07.523 Removing: /var/run/dpdk/spdk_pid121439 00:32:07.523 Removing: /var/run/dpdk/spdk_pid122426 00:32:07.523 Removing: /var/run/dpdk/spdk_pid122811 00:32:07.523 Removing: /var/run/dpdk/spdk_pid123013 00:32:07.523 Removing: /var/run/dpdk/spdk_pid124000 00:32:07.523 Removing: /var/run/dpdk/spdk_pid124566 00:32:07.523 Removing: /var/run/dpdk/spdk_pid124762 00:32:07.523 Removing: /var/run/dpdk/spdk_pid127022 00:32:07.523 Removing: /var/run/dpdk/spdk_pid127543 00:32:07.523 Removing: /var/run/dpdk/spdk_pid127749 00:32:07.523 Removing: /var/run/dpdk/spdk_pid129993 00:32:07.523 Removing: /var/run/dpdk/spdk_pid130501 00:32:07.523 Removing: /var/run/dpdk/spdk_pid130706 00:32:07.523 Removing: /var/run/dpdk/spdk_pid132937 00:32:07.523 Removing: /var/run/dpdk/spdk_pid133730 00:32:07.523 Removing: /var/run/dpdk/spdk_pid133937 00:32:07.523 Removing: /var/run/dpdk/spdk_pid136435 00:32:07.523 Removing: /var/run/dpdk/spdk_pid136995 00:32:07.523 Removing: /var/run/dpdk/spdk_pid137212 00:32:07.523 Removing: /var/run/dpdk/spdk_pid139721 00:32:07.523 Removing: /var/run/dpdk/spdk_pid140290 00:32:07.523 Removing: /var/run/dpdk/spdk_pid140520 00:32:07.523 Removing: /var/run/dpdk/spdk_pid142993 00:32:07.523 Removing: /var/run/dpdk/spdk_pid143862 00:32:07.523 Removing: /var/run/dpdk/spdk_pid144089 00:32:07.523 Removing: /var/run/dpdk/spdk_pid144304 00:32:07.523 Removing: /var/run/dpdk/spdk_pid144872 00:32:07.523 Removing: /var/run/dpdk/spdk_pid145849 00:32:07.523 Removing: /var/run/dpdk/spdk_pid146334 00:32:07.523 Removing: /var/run/dpdk/spdk_pid147242 00:32:07.523 Removing: /var/run/dpdk/spdk_pid147524 00:32:07.523 Clean 00:32:07.781 14:14:56 bdev_raid -- common/autotest_common.sh@1451 -- # return 1 00:32:07.781 14:14:56 bdev_raid -- 
common/autotest_common.sh@1 -- # :
00:32:07.781 14:14:56 bdev_raid -- common/autotest_common.sh@1 -- # exit 1
00:32:08.049 [Pipeline] }
00:32:08.071 [Pipeline] // timeout
00:32:08.078 [Pipeline] }
00:32:08.099 [Pipeline] // stage
00:32:08.105 [Pipeline] }
00:32:08.107 ERROR: script returned exit code 1
00:32:08.108 Setting overall build result to FAILURE
00:32:08.119 [Pipeline] // catchError
00:32:08.128 [Pipeline] stage
00:32:08.130 [Pipeline] { (Stop VM)
00:32:08.144 [Pipeline] sh
00:32:08.423 + vagrant halt
00:32:12.629 ==> default: Halting domain...
00:32:22.629 [Pipeline] sh
00:32:22.907 + vagrant destroy -f
00:32:27.089 ==> default: Removing domain...
00:32:27.100 [Pipeline] sh
00:32:27.379 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest_3/output
00:32:27.388 [Pipeline] }
00:32:27.404 [Pipeline] // stage
00:32:27.410 [Pipeline] }
00:32:27.426 [Pipeline] // dir
00:32:27.431 [Pipeline] }
00:32:27.447 [Pipeline] // wrap
00:32:27.453 [Pipeline] }
00:32:27.466 [Pipeline] // catchError
00:32:27.475 [Pipeline] stage
00:32:27.476 [Pipeline] { (Epilogue)
00:32:27.488 [Pipeline] sh
00:32:27.766 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:42.645 [Pipeline] catchError
00:32:42.647 [Pipeline] {
00:32:42.661 [Pipeline] sh
00:32:42.977 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:42.977 Artifacts sizes are good
00:32:42.985 [Pipeline] }
00:32:43.003 [Pipeline] // catchError
00:32:43.014 [Pipeline] archiveArtifacts
00:32:43.021 Archiving artifacts
00:32:47.056 [Pipeline] cleanWs
00:32:47.068 [WS-CLEANUP] Deleting project workspace...
00:32:47.068 [WS-CLEANUP] Deferred wipeout is used...
00:32:47.075 [WS-CLEANUP] done
00:32:47.077 [Pipeline] }
00:32:47.096 [Pipeline] // stage
00:32:47.102 [Pipeline] }
00:32:47.120 [Pipeline] // node
00:32:47.126 [Pipeline] End of Pipeline
00:32:47.161 Finished: FAILURE